Add BuildRequires to RHEL specfile #1

Merged (1 commit into openvswitch:master) on May 1, 2014

Conversation

spil-jasper (Contributor)

Added a build-time dependency on openssl-devel, so you can easily build
a source RPM and then pass it to mock for building in a clean build
environment (without having to install the BuildRequires manually).

Signed-off-by: Jasper Capel <[email protected]>
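
For illustration, the workflow this enables looks roughly like the
following -- a minimal sketch, assuming the spec lives at
rhel/openvswitch.spec and default rpmbuild paths (not taken from this PR):

  # Build a source RPM from the spec file:
  rpmbuild -bs rhel/openvswitch.spec

  # Let mock resolve the BuildRequires (now including openssl-devel)
  # in a clean chroot and build the binary RPMs there:
  mock --rebuild ~/rpmbuild/SRPMS/openvswitch-*.src.rpm
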
@spil-jasper (Contributor, Author)

Sorry for the late response. I added the sign-off line and updated the AUTHORS file.

Thanks,
Jasper

blp added a commit that referenced this pull request May 1, 2014
Add BuildRequires to RHEL specfile
blp merged commit d73728e into openvswitch:master on May 1, 2014
jessegross added a commit that referenced this pull request May 21, 2014
There are two problematic situations.

A deadlock can happen when is_percpu is false, because the code can be
interrupted while holding the spinlock; it then executes
ovs_flow_stats_update() in softirq context, which tries to take the
same lock.

The second situation is that when is_percpu is true, the code
correctly disables BH, but only for the local CPU, so the
following can happen when locking the remote CPU without
disabling BH:

       CPU#0                            CPU#1
  ovs_flow_stats_get()
   stats_read()
 +->spin_lock remote CPU#1        ovs_flow_stats_get()
 |  <interrupted>                  stats_read()
 |  ...                       +-->  spin_lock remote CPU#0
 |                            |     <interrupted>
 |  ovs_flow_stats_update()   |     ...
 |   spin_lock local CPU#0 <--+     ovs_flow_stats_update()
 +---------------------------------- spin_lock local CPU#1

This patch disables BH in both cases, fixing the deadlocks.
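
A minimal C sketch of that fix, assuming the function and field names
seen in the lockdep report below (an illustration, not the verbatim
patch):

  /* Take a flow-stats lock with bottom halves disabled in *both*
   * cases -- local and remote CPU -- so a softirq that calls
   * ovs_flow_stats_update() can never interrupt a holder of the
   * same lock. */
  static void stats_read(struct flow_stats *stats,
                         struct ovs_flow_stats *ovs_stats)
  {
          spin_lock_bh(&stats->lock);          /* was spin_lock() */
          ovs_stats->n_packets += stats->packet_count;
          ovs_stats->n_bytes += stats->byte_count;
          spin_unlock_bh(&stats->lock);
  }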

=================================
[ INFO: inconsistent lock state ]
3.14.0-rc8-00007-g632b06a #1 Tainted: G          I
---------------------------------
inconsistent {SOFTIRQ-ON-W} -> {IN-SOFTIRQ-W} usage.
swapper/0/0 [HC0[0]:SC1[5]:HE1:SE0] takes:
(&(&cpu_stats->lock)->rlock){+.?...}, at: [<ffffffffa05dd8a1>] ovs_flow_stats_update+0x51/0xd0 [openvswitch]
{SOFTIRQ-ON-W} state was registered at:
[<ffffffff810f973f>] __lock_acquire+0x68f/0x1c40
[<ffffffff810fb4e2>] lock_acquire+0xa2/0x1d0
[<ffffffff817d8d9e>] _raw_spin_lock+0x3e/0x80
[<ffffffffa05dd9e4>] ovs_flow_stats_get+0xc4/0x1e0 [openvswitch]
[<ffffffffa05da855>] ovs_flow_cmd_fill_info+0x185/0x360 [openvswitch]
[<ffffffffa05daf05>] ovs_flow_cmd_build_info.constprop.27+0x55/0x90 [openvswitch]
[<ffffffffa05db41d>] ovs_flow_cmd_new_or_set+0x4dd/0x570 [openvswitch]
[<ffffffff816c245d>] genl_family_rcv_msg+0x1cd/0x3f0
[<ffffffff816c270e>] genl_rcv_msg+0x8e/0xd0
[<ffffffff816c0239>] netlink_rcv_skb+0xa9/0xc0
[<ffffffff816c0798>] genl_rcv+0x28/0x40
[<ffffffff816bf830>] netlink_unicast+0x100/0x1e0
[<ffffffff816bfc57>] netlink_sendmsg+0x347/0x770
[<ffffffff81668e9c>] sock_sendmsg+0x9c/0xe0
[<ffffffff816692d9>] ___sys_sendmsg+0x3a9/0x3c0
[<ffffffff8166a911>] __sys_sendmsg+0x51/0x90
[<ffffffff8166a962>] SyS_sendmsg+0x12/0x20
[<ffffffff817e3ce9>] system_call_fastpath+0x16/0x1b
irq event stamp: 1740726
hardirqs last  enabled at (1740726): [<ffffffff8175d5e0>] ip6_finish_output2+0x4f0/0x840
hardirqs last disabled at (1740725): [<ffffffff8175d59b>] ip6_finish_output2+0x4ab/0x840
softirqs last  enabled at (1740674): [<ffffffff8109be12>] _local_bh_enable+0x22/0x50
softirqs last disabled at (1740675): [<ffffffff8109db05>] irq_exit+0xc5/0xd0

other info that might help us debug this:
 Possible unsafe locking scenario:

       CPU0
       ----
  lock(&(&cpu_stats->lock)->rlock);
  <Interrupt>
    lock(&(&cpu_stats->lock)->rlock);

 *** DEADLOCK ***

5 locks held by swapper/0/0:
 #0:  (((&ifa->dad_timer))){+.-...}, at: [<ffffffff810a7155>] call_timer_fn+0x5/0x320
 #1:  (rcu_read_lock){.+.+..}, at: [<ffffffff81788a55>] mld_sendpack+0x5/0x4a0
 #2:  (rcu_read_lock_bh){.+....}, at: [<ffffffff8175d149>] ip6_finish_output2+0x59/0x840
 #3:  (rcu_read_lock_bh){.+....}, at: [<ffffffff8168ba75>] __dev_queue_xmit+0x5/0x9b0
 #4:  (rcu_read_lock){.+.+..}, at: [<ffffffffa05e41b5>] internal_dev_xmit+0x5/0x110 [openvswitch]

stack backtrace:
CPU: 0 PID: 0 Comm: swapper/0 Tainted: G          I  3.14.0-rc8-00007-g632b06a #1
Hardware name:                  /DX58SO, BIOS SOX5810J.86A.5599.2012.0529.2218 05/29/2012
 0000000000000000 0fcf20709903df0c ffff88042d603808 ffffffff817cfe3c
 ffffffff81c134c0 ffff88042d603858 ffffffff817cb6da 0000000000000005
 ffffffff00000001 ffff880400000000 0000000000000006 ffffffff81c134c0
Call Trace:
 <IRQ>  [<ffffffff817cfe3c>] dump_stack+0x4d/0x66
 [<ffffffff817cb6da>] print_usage_bug+0x1f4/0x205
 [<ffffffff810f7f10>] ? check_usage_backwards+0x180/0x180
 [<ffffffff810f8963>] mark_lock+0x223/0x2b0
 [<ffffffff810f96d3>] __lock_acquire+0x623/0x1c40
 [<ffffffff810f5707>] ? __lock_is_held+0x57/0x80
 [<ffffffffa05e26c6>] ? masked_flow_lookup+0x236/0x250 [openvswitch]
 [<ffffffff810fb4e2>] lock_acquire+0xa2/0x1d0
 [<ffffffffa05dd8a1>] ? ovs_flow_stats_update+0x51/0xd0 [openvswitch]
 [<ffffffff817d8d9e>] _raw_spin_lock+0x3e/0x80
 [<ffffffffa05dd8a1>] ? ovs_flow_stats_update+0x51/0xd0 [openvswitch]
 [<ffffffffa05dd8a1>] ovs_flow_stats_update+0x51/0xd0 [openvswitch]
 [<ffffffffa05dcc64>] ovs_dp_process_received_packet+0x84/0x120 [openvswitch]
 [<ffffffff810f93f7>] ? __lock_acquire+0x347/0x1c40
 [<ffffffffa05e3bea>] ovs_vport_receive+0x2a/0x30 [openvswitch]
 [<ffffffffa05e4218>] internal_dev_xmit+0x68/0x110 [openvswitch]
 [<ffffffffa05e41b5>] ? internal_dev_xmit+0x5/0x110 [openvswitch]
 [<ffffffff8168b4a6>] dev_hard_start_xmit+0x2e6/0x8b0
 [<ffffffff8168be87>] __dev_queue_xmit+0x417/0x9b0
 [<ffffffff8168ba75>] ? __dev_queue_xmit+0x5/0x9b0
 [<ffffffff8175d5e0>] ? ip6_finish_output2+0x4f0/0x840
 [<ffffffff8168c430>] dev_queue_xmit+0x10/0x20
 [<ffffffff8175d641>] ip6_finish_output2+0x551/0x840
 [<ffffffff8176128a>] ? ip6_finish_output+0x9a/0x220
 [<ffffffff8176128a>] ip6_finish_output+0x9a/0x220
 [<ffffffff8176145f>] ip6_output+0x4f/0x1f0
 [<ffffffff81788c29>] mld_sendpack+0x1d9/0x4a0
 [<ffffffff817895b8>] mld_send_initial_cr.part.32+0x88/0xa0
 [<ffffffff817691b0>] ? addrconf_dad_completed+0x220/0x220
 [<ffffffff8178e301>] ipv6_mc_dad_complete+0x31/0x50
 [<ffffffff817690d7>] addrconf_dad_completed+0x147/0x220
 [<ffffffff817691b0>] ? addrconf_dad_completed+0x220/0x220
 [<ffffffff8176934f>] addrconf_dad_timer+0x19f/0x1c0
 [<ffffffff810a71e9>] call_timer_fn+0x99/0x320
 [<ffffffff810a7155>] ? call_timer_fn+0x5/0x320
 [<ffffffff817691b0>] ? addrconf_dad_completed+0x220/0x220
 [<ffffffff810a76c4>] run_timer_softirq+0x254/0x3b0
 [<ffffffff8109d47d>] __do_softirq+0x12d/0x480

Signed-off-by: Flavio Leitner <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Jesse Gross <[email protected]>
blp added a commit that referenced this pull request Aug 6, 2014
Otherwise creating the first dpif-netdev bridge fails because there are
no handlers:

  Program terminated with signal 8, Arithmetic exception.
  #0  0x080971e9 in dp_execute_cb (aux_=aux_@entry=0xffcfaa54,
      packet=packet@entry=0xffcfac54, md=md@entry=0xffcfac84,
      a=a@entry=0x8f58930, may_steal=false) at ../lib/dpif-netdev.c:2154
  #1  0x080b5adb in odp_execute_actions__ (dp=dp@entry=0xffcfaa54,
      packet=packet@entry=0xffcfac54, steal=steal@entry=false,
      md=md@entry=0xffcfac84, actions=actions@entry=0x8f58930,
      actions_len=actions_len@entry=20,
      dp_execute_action=dp_execute_action@entry=0x8097040 <dp_execute_cb>,
      more_actions=more_actions@entry=false) at ../lib/odp-execute.c:218
  #2  0x080b5def in odp_execute_actions (dp=dp@entry=0xffcfaa54,
      packet=packet@entry=0xffcfac54, steal=steal@entry=false,
      md=md@entry=0xffcfac84, actions=0x8f58930, actions_len=20,
      dp_execute_action=dp_execute_action@entry=0x8097040 <dp_execute_cb>)
      at ../lib/odp-execute.c:285
  #3  0x08095098 in dp_netdev_execute_actions (actions_len=<optimized out>,
      actions=<optimized out>, md=0xffcfac84, may_steal=false,
      packet=0xffcfac54, key=0xffcfaa5c, dp=<optimized out>)
      at ../lib/dpif-netdev.c:2227
  #4  dpif_netdev_execute (dpif=0x8f59598, execute=0xffcfac78)
      at ../lib/dpif-netdev.c:1551
  #5  0x0809a56c in dpif_execute (dpif=0x8f59598,
      execute=execute@entry=0xffcfac78) at ../lib/dpif.c:1227
  #6  0x08071071 in check_variable_length_userdata (backer=<optimized out>)
      at ../ofproto/ofproto-dpif.c:1040
  #7  open_dpif_backer (backerp=0x8f5834c, type=<optimized out>)
      at ../ofproto/ofproto-dpif.c:921
  #8  construct (ofproto_=0x8f581c0) at ../ofproto/ofproto-dpif.c:1120
  #9  0x080675e0 in ofproto_create (datapath_name=0x8f57310 "br0",
      datapath_type=<optimized out>, ofprotop=ofprotop@entry=0x8f576c8)
      at ../ofproto/ofproto.c:564
  #10 0x080529aa in bridge_reconfigure (ovs_cfg=ovs_cfg@entry=0x8f596d8)
      at ../vswitchd/bridge.c:572
  #11 0x08055e33 in bridge_run () at ../vswitchd/bridge.c:2339
  #12 0x0804cdbd in main (argc=9, argv=0xffcfb554)
      at ../vswitchd/ovs-vswitchd.c:116

This bug was introduced by commit 9bcefb9 (ofproto-dpif: fix an ovs
crash when dpif_recv_set returns error).
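
The arithmetic exception above is consistent with an integer modulo by
a zero handler count when choosing an upcall queue; a minimal sketch of
the kind of guard involved, assuming illustrative names (not the actual
patch):

  /* With zero handlers, 'hash % n_handlers' is a division by zero
   * (SIGFPE); guarantee at least one handler or guard the modulo. */
  static uint32_t choose_upcall_queue(uint32_t hash, uint32_t n_handlers)
  {
          return n_handlers ? hash % n_handlers : 0;
  }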

CC: Andy Zhou <[email protected]>
Signed-off-by: Ben Pfaff <[email protected]>
Acked-by: Justin Pettit <[email protected]>
pshelar pushed a commit that referenced this pull request Sep 23, 2014
OVS needs to segment large skbs before sending them to userspace for
miss-packet handling, but skb_gso_segment() uses skb->cb. This
corrupted OVS_CB, which resulted in the following panic.

[  735.419921] BUG: unable to handle kernel paging request at 00000014000001b2
[  735.423168] Oops: 0000 [#1] SMP DEBUG_PAGEALLOC
[  735.445097] RIP: 0010:[<ffffffffa05df0d7>]  [<ffffffffa05df0d7>] ovs_nla_put_flow+0x37/0x7c0 [openvswitch]
[  735.468858] Call Trace:
[  735.470384]  [<ffffffffa05d7ec2>] queue_userspace_packet+0x332/0x4d0 [openvswitch]
[  735.471741]  [<ffffffffa05d8155>] queue_gso_packets+0xf5/0x180 [openvswitch]
[  735.481862]  [<ffffffffa05da9f5>] ovs_dp_upcall+0x65/0x70 [openvswitch]
[  735.483031]  [<ffffffffa05dab81>] ovs_dp_process_packet+0x181/0x1b0 [openvswitch]
[  735.484391]  [<ffffffffa05e2f55>] ovs_vport_receive+0x65/0x90 [openvswitch]
[  735.492638]  [<ffffffffa05e5738>] internal_dev_xmit+0x68/0x110 [openvswitch]
[  735.495334]  [<ffffffff81588eb6>] dev_hard_start_xmit+0x2e6/0x8b0
[  735.496503]  [<ffffffff81589847>] __dev_queue_xmit+0x3c7/0x920
[  735.499827]  [<ffffffff81589db0>] dev_queue_xmit+0x10/0x20
[  735.500798]  [<ffffffff815d3b60>] ip_finish_output+0x6a0/0x950
[  735.502818]  [<ffffffff815d55f8>] ip_output+0x68/0x110
[  735.503835]  [<ffffffff815d4979>] ip_local_out+0x29/0x90
[  735.504801]  [<ffffffff815d4e46>] ip_queue_xmit+0x1d6/0x640
[  735.507015]  [<ffffffff815ee0d7>] tcp_transmit_skb+0x477/0xac0
[  735.508260]  [<ffffffff815ee856>] tcp_write_xmit+0x136/0xba0
[  735.510829]  [<ffffffff815ef56e>] __tcp_push_pending_frames+0x2e/0xc0
[  735.512296]  [<ffffffff815e0593>] tcp_sendmsg+0xa63/0xd50
[  735.513526]  [<ffffffff81612c2c>] inet_sendmsg+0x10c/0x220
[  735.516025]  [<ffffffff81566b8c>] sock_sendmsg+0x9c/0xe0
[  735.518066]  [<ffffffff81566d41>] SYSC_sendto+0x121/0x1c0
[  735.521398]  [<ffffffff8156801e>] SyS_sendto+0xe/0x10
[  735.522473]  [<ffffffff816df5e9>] system_call_fastpath+0x16/0x1b
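
A minimal C sketch of the fix this describes, assuming an illustrative
wrapper name (not the verbatim patch): save the OVS control block
before segmenting and restore it on every resulting segment, since
skb_gso_segment() reuses skb->cb.

  static int queue_gso_segments(struct sk_buff *skb)  /* assumed wrapper */
  {
          struct ovs_skb_cb ovs_cb = *OVS_CB(skb); /* save before cb is clobbered */
          struct sk_buff *segs, *seg;

          segs = __skb_gso_segment(skb, NETIF_F_SG, false);
          if (IS_ERR(segs))
                  return PTR_ERR(segs);

          for (seg = segs; seg; seg = seg->next)
                  *OVS_CB(seg) = ovs_cb;           /* restore on each segment */
          /* ... queue each segment to userspace ... */
          return 0;
  }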

Reported-by: Andy Zhou <[email protected]>
Signed-off-by: Pravin B Shelar <[email protected]>
Acked-by: Andy Zhou <[email protected]>
blp pushed a commit that referenced this pull request Oct 6, 2014
Primary goals of netdev-windows.c are:
1) To query the 'network device' information of a vport such as MTU, etc.
2) Monitor changes to the 'network device' information such as link
status.

In this change, we implement only #1. #2 could also be implemented, but
it does not seem to be required for the purposes of implementing
'ovs-dpctl.exe show'.

Signed-off-by: Nithin Raju <[email protected]>
Acked-by: Ankur Sharma <[email protected]>
Acked-by: Alin Gabriel Serdean <[email protected]>
Tested-by: Alin Gabriel Serdean <[email protected]>
Signed-off-by: Ben Pfaff <[email protected]>
blp pushed a commit that referenced this pull request Oct 28, 2014
In this patch, we make the following updates to the vport add code:

1. Clarify the roles of the different hash tables, so it is easier to
   read/write code in the future.
2. Update OvsNewVportCmdHandler() to support adding bridge-internal
   ports.
3. Fixes in OvsNewVportCmdHandler() to support adding an external port.
   Earlier, we'd hit ASSERTs.
4. I could not figure out a way to add a port of type
   OVS_PORT_TYPE_INTERNAL with name "internal" to the confdb using
   ovs-vsctl.exe. And, this is needed in order to add the Hyper-V
   internal port from userspace. To work around this problem, we treat a
   port of type OVS_PORT_TYPE_NETDEV with name "internal" as a request
   to add the Hyper-V internal port. This is a workaround. The end
   result is that there's a discrepancy between the port type in the
   datapath vs. the confdb, but this has not created any trouble in
   testing so far. If this ends up becoming an issue, we can mark the
   Hyper-V internal port to be of type OVS_PORT_TYPE_NETDEV. No harm.
5. Because of changes indicated in #1, we also update the vport dump
   code to look at the correct hash table for ports added from
   userspace.
6. Add a OvsGetTunnelVport() for convenience.
7. Update ASSERTs() while cleaning up the switch.
8. Nuke OvsGetExternalVport() and OvsGetExternalMtu().

Signed-off-by: Nithin Raju <[email protected]>
Acked-by: Alin Gabriel Serdean <[email protected]>
Tested-by: Alin Gabriel Serdean <[email protected]>
Acked-by: Ankur Sharma <[email protected]>
Signed-off-by: Ben Pfaff <[email protected]>
pshelar pushed a commit that referenced this pull request Feb 20, 2015
Open vSwitch allows moving an internal vport to a different namespace
while it is still connected to the bridge. But when that namespace is
deleted, OVS does not detach these vports, which leaves a dangling
pointer to the netdevice and causes a kernel panic as follows.
This issue is fixed by detaching all OVS ports from the deleted
namespace at net-exit.

BUG: unable to handle kernel NULL pointer dereference at 0000000000000028
IP: [<ffffffffa0aadaa5>] ovs_vport_locate+0x35/0x80 [openvswitch]
Oops: 0000 [#1] SMP
Call Trace:
 [<ffffffffa0aa6391>] lookup_vport+0x21/0xd0 [openvswitch]
 [<ffffffffa0aa65f9>] ovs_vport_cmd_get+0x59/0xf0 [openvswitch]
 [<ffffffff8167e07c>] genl_family_rcv_msg+0x1bc/0x3e0
 [<ffffffff8167e319>] genl_rcv_msg+0x79/0xc0
 [<ffffffff8167d919>] netlink_rcv_skb+0xb9/0xe0
 [<ffffffff8167deac>] genl_rcv+0x2c/0x40
 [<ffffffff8167cffd>] netlink_unicast+0x12d/0x1c0
 [<ffffffff8167d3da>] netlink_sendmsg+0x34a/0x6b0
 [<ffffffff8162e140>] sock_sendmsg+0xa0/0xe0
 [<ffffffff8162e5e8>] ___sys_sendmsg+0x408/0x420
 [<ffffffff8162f541>] __sys_sendmsg+0x51/0x90
 [<ffffffff8162f592>] SyS_sendmsg+0x12/0x20
 [<ffffffff81764ee9>] system_call_fastpath+0x12/0x17
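
A minimal C sketch of the described fix, assuming illustrative helper
names (not taken from the patch): a per-net exit hook detaches every
OVS port whose backing netdevice lives in the dying namespace.

  static void __net_exit ovs_exit_net(struct net *dnet)
  {
          struct vport *vport, *vport_next;
          LIST_HEAD(head);

          ovs_lock();
          /* Collect vports (from any datapath) whose underlying
           * netdevice belongs to the namespace being torn down;
           * collect_vports_from_net() and 'detach_list' are assumed
           * names for this sketch. */
          collect_vports_from_net(dnet, &head);

          /* ... and detach them before their netdevices disappear. */
          list_for_each_entry_safe(vport, vport_next, &head, detach_list)
                  ovs_dp_detach_port(vport);
          ovs_unlock();
  }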

Reported-by: Assaf Muller <[email protected]>
Fixes: 46df7b81454("openvswitch: Add support for network namespaces.")
Signed-off-by: Pravin B Shelar <[email protected]>
Reviewed-by: Thomas Graf <[email protected]>
Signed-off-by: David S. Miller <[email protected]>

Upstream: 7b4577a9da ("openvswitch: Fix net exit").
Acked-by: Andy Zhou <[email protected]>
pshelar pushed a commit that referenced this pull request Feb 27, 2015
Flow alloc needs to initialize the unmasked-key pointer. Otherwise the
kernel can crash trying to free a random unmasked-key pointer.

general protection fault: 0000 [#1] SMP
3.19.0-rc6-net-next+ #457
Hardware name: Supermicro X7DWU/X7DWU, BIOS  1.1 04/30/2008
RIP: 0010:[<ffffffff8111df0e>] [<ffffffff8111df0e>] kfree+0xac/0x196
Call Trace:
 [<ffffffffa060bd87>] flow_free+0x21/0x59 [openvswitch]
 [<ffffffffa060bde0>] ovs_flow_free+0x21/0x23 [openvswitch]
 [<ffffffffa0605b4a>] ovs_packet_cmd_execute+0x2f3/0x35f [openvswitch]
 [<ffffffffa0605995>] ? ovs_packet_cmd_execute+0x13e/0x35f [openvswitch]
 [<ffffffff811fe6fb>] ? nla_parse+0x4f/0xec
 [<ffffffff8139a2fc>] genl_family_rcv_msg+0x26d/0x2c9
 [<ffffffff8107620f>] ? __lock_acquire+0x90e/0x9aa
 [<ffffffff8139a3be>] genl_rcv_msg+0x66/0x89
 [<ffffffff8139a358>] ? genl_family_rcv_msg+0x2c9/0x2c9
 [<ffffffff81399591>] netlink_rcv_skb+0x3e/0x95
 [<ffffffff81399898>] ? genl_rcv+0x18/0x37
 [<ffffffff813998a7>] genl_rcv+0x27/0x37
 [<ffffffff81399033>] netlink_unicast+0x103/0x191
 [<ffffffff81399382>] netlink_sendmsg+0x2c1/0x310
 [<ffffffff811007ad>] ? might_fault+0x50/0xa0
 [<ffffffff8135c773>] do_sock_sendmsg+0x5f/0x7a
 [<ffffffff8135c799>] sock_sendmsg+0xb/0xd
 [<ffffffff8135cacf>] ___sys_sendmsg+0x1a3/0x218
 [<ffffffff8113e54b>] ? get_close_on_exec+0x86/0x86
 [<ffffffff8115a9d0>] ? fsnotify+0x32c/0x348
 [<ffffffff8115a720>] ? fsnotify+0x7c/0x348
 [<ffffffff8113e5f5>] ? __fget+0xaa/0xbf
 [<ffffffff8113e54b>] ? get_close_on_exec+0x86/0x86
 [<ffffffff8135cccd>] __sys_sendmsg+0x3d/0x5e
 [<ffffffff8135cd02>] SyS_sendmsg+0x14/0x16
 [<ffffffff81411852>] system_call_fastpath+0x12/0x17
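
A minimal C sketch of the fix this describes, assuming the field name
'unmasked_key' (not the verbatim patch): initialize the pointer at
allocation time so flow_free() never kfree()s an uninitialized pointer.

  struct sw_flow *ovs_flow_alloc(void)
  {
          struct sw_flow *flow;

          flow = kmem_cache_alloc(flow_cache, GFP_KERNEL);
          if (!flow)
                  return ERR_PTR(-ENOMEM);

          flow->unmasked_key = NULL;  /* safe to flow_free() at any point */
          /* ... remaining key/stats initialization ... */
          return flow;
  }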

Reported-by: Or Gerlitz <[email protected]>
Signed-off-by: Pravin B Shelar <[email protected]>
russellb referenced this pull request in russellb/ovs Aug 24, 2015
Introduce a new logical port type called "localnet".  A logical port
with this type also has an option called "network_name".  A "localnet"
logical port represents a connection to a network that is locally
accessible from each chassis running ovn-controller.  ovn-controller
will use the ovn-bridge-mappings configuration to figure out which
patch port on br-int should be used for this port.

OpenStack Neutron has an API extension called "provider networks" which
allows an administrator to specify that it would like ports directly
attached to some pre-existing network in their environment.  There was a
previous thread where we got into the details of this here:

  http://openvswitch.org/pipermail/dev/2015-June/056765.html

The case where this would be used is an environment that isn't actually
interested in virtual networks and just wants all of their compute
resources connected up to externally managed networks.  Even in this
environment, OVN still has a lot of value to add.  OVN implements port
security and ACLs for all ports connected to these networks.  OVN also
provides the configuration interface and control plane to manage this
across many hypervisors.

As a specific example, consider an environment with two hypervisors
(A and B) with two VMs on each hypervisor (A1, A2, B1, B2).  Now imagine
that the desired setup from an OpenStack perspective is to have all of
these VMs attached to the same provider network, which is a physical
network we'll refer to as "physnet1".

The first step here is to configure each hypervisor with bridge mappings
that tell ovn-controller that a local bridge called "br-eth1" is used to
reach the network called "physnet1".  We can simulate the initial setup
of this environment in ovs-sandbox with the following commands:

  # Setup the local hypervisor (A)
  ovs-vsctl add-br br-eth1
  ovs-vsctl set open . external-ids:ovn-bridge-mappings=physnet1:br-eth1

  # Create a fake remote hypervisor (B)
  ovn-sbctl chassis-add fakechassis geneve 127.0.0.1

To get the behavior we want, we model every Neutron port connected to a
Neutron provider network as an OVN logical switch with 2 ports.  The
first port is a normal logical port to be used by the VM.  The second
logical port is a special port with its type set to "localnet".

You could imagine an alternative configuration where there are many OVN
logical ports with a single OVN "localnet" logical port on the same OVN
logical switch.  This setup provides something different, where the
logical ports would communicate with each other in logical space via
tunnels between hypervisors.  For Neutron's use case, we want all ports
communicating via an existing network without the use of an overlay.

To simulate the creation of the OVN logical switches and OVN logical
ports for A1, A2, B1, and B2, you can run the following commands:

  # Create 4 OVN logical switches.  Each logical switch has 2 ports,
  # port1 for a VM and physnet1 for the existing network we are
  # connecting to.
  for n in 1 2 3 4; do
      ovn-nbctl lswitch-add provnet1-$n

      ovn-nbctl lport-add provnet1-$n provnet1-$n-port1
      ovn-nbctl lport-set-macs provnet1-$n-port1 00:00:00:00:00:0$n
      ovn-nbctl lport-set-port-security provnet1-$n-port1 00:00:00:00:00:0$n

      ovn-nbctl lport-add provnet1-$n provnet1-$n-physnet1
      ovn-nbctl lport-set-macs provnet1-$n-physnet1 unknown
      ovn-nbctl lport-set-type provnet1-$n-physnet1 localnet
      ovn-nbctl lport-set-options provnet1-$n-physnet1 network_name=physnet1
  done

  # Bind lport1 (A1) and lport2 (A2) to the local hypervisor.
  ovs-vsctl add-port br-int lport1 -- set Interface lport1 external_ids:iface-id=provnet1-1-port1
  ovs-vsctl add-port br-int lport2 -- set Interface lport2 external_ids:iface-id=provnet1-2-port1

  # Bind the other 2 ports to the fake remote hypervisor.
  ovn-sbctl lport-bind provnet1-3-port1 fakechassis
  ovn-sbctl lport-bind provnet1-4-port1 fakechassis

After running these commands, we have the following logical
configuration:

  $ ovn-nbctl show
    lswitch 035645fc-b2ff-4e26-b953-69addba80a9a (provnet1-4)
        lport provnet1-4-physnet1
            macs: unknown
        lport provnet1-4-port1
            macs: 00:00:00:00:00:04
    lswitch 66212a85-b3b6-4688-bcf6-8062941a2d96 (provnet1-2)
        lport provnet1-2-physnet1
            macs: unknown
        lport provnet1-2-port1
            macs: 00:00:00:00:00:02
    lswitch fc5b1141-0216-4fa7-86f3-461811c1fc9b (provnet1-3)
        lport provnet1-3-physnet1
            macs: unknown
        lport provnet1-3-port1
            macs: 00:00:00:00:00:03
    lswitch 9b1d2636-e654-4d43-84e8-a921af611b33 (provnet1-1)
        lport provnet1-1-physnet1
            macs: unknown
        lport provnet1-1-port1
            macs: 00:00:00:00:00:01

We can also look at OVN_Southbound to see that 2 logical ports are bound
to each hypervisor:

  $ ovn-sbctl show
  Chassis "56b18105-5706-46ef-80c4-ff20979ab068"
      Encap geneve
          ip: "127.0.0.1"
      Port_Binding "provnet1-1-port1"
      Port_Binding "provnet1-2-port1"
  Chassis fakechassis
      Encap geneve
          ip: "127.0.0.1"
      Port_Binding "provnet1-3-port1"
      Port_Binding "provnet1-4-port1"

Now we can generate several packets to test how a packet would be
processed on hypervisor A.  The OpenFlow port numbers in this demo are:

  1 - patch port to br-eth1 (physnet1)
  2 - tunnel to fakechassis
  3 - lport1 (A1)
  4 - lport2 (A2)

Packet test #1: A1 to A2 - This will be output to ofport 1.  Despite
both VMs being local to this hypervisor, all packets between the VMs go
through physnet1.  In practice, this will get optimized at br-eth1.

  ovs-appctl ofproto/trace br-int \
  > in_port=3,dl_src=00:00:00:00:00:01,dl_dst=00:00:00:00:00:02 -generate

Packet test #2: physnet1 to A2 - Consider this a continuation of test
#1: when the packet arrives from physnet1, only the ports on the
logical network it is attached to are considered.  The end result
should be that the only output is to ofport 4 (A2).

  ovs-appctl ofproto/trace br-int \
  > in_port=1,dl_src=00:00:00:00:00:01,dl_dst=00:00:00:00:00:02 -generate

Packet test #3: A1 to B1 - This will be output to ofport 1, as physnet1
is to be used to reach any other port.  When it arrives at hypervisor B,
processing would look just like test #2.

  ovs-appctl ofproto/trace br-int \
  > in_port=3,dl_src=00:00:00:00:00:01,dl_dst=00:00:00:00:00:03 -generate

Packet test #4: A1 broadcast. - Again, the packet will only be sent to
physnet1.

  ovs-appctl ofproto/trace br-int \
  > in_port=3,dl_src=00:00:00:00:00:01,dl_dst=ff:ff:ff:ff:ff:ff -generate

Packet test #5: B1 broadcast arriving at hypervisor A.  This is somewhat
a continuation of test #4.  When a broadcast packet arrives from
physnet1 on hypervisor A, we should see it output to both A1 and A2
(ofports 3 and 4).

  ovs-appctl ofproto/trace br-int \
  > in_port=1,dl_src=00:00:00:00:00:03,dl_dst=ff:ff:ff:ff:ff:ff -generate

Signed-off-by: Russell Bryant <[email protected]>
russellb referenced this pull request in russellb/ovs Aug 25, 2015
Introduce a new logical port type called "localnet".  A logical port
with this type also has an option called "network_name".  A "localnet"
logical port represents a connection to a network that is locally
accessible from each chassis running ovn-controller.  ovn-controller
will use the ovn-bridge-mappings configuration to figure out which
patch port on br-int should be used for this port.

OpenStack Neutron has an API extension called "provider networks" which
allows an administrator to specify that it would like ports directly
attached to some pre-existing network in their environment.  There was a
previous thread where we got into the details of this here:

  http://openvswitch.org/pipermail/dev/2015-June/056765.html

The case where this would be used is an environment that isn't actually
interested in virtual networks and just wants all of their compute
resources connected up to externally managed networks.  Even in this
environment, OVN still has a lot of value to add.  OVN implements port
security and ACLs for all ports connected to these networks.  OVN also
provides the configuration interface and control plane to manage this
across many hypervisors.

As a specific example, consider an environment with two hypvervisors
(A and B) with two VMs on each hypervisor (A1, A2, B1, B2).  Now imagine
that the desired setup from an OpenStack perspective is to have all of
these VMs attached to the same provider network, which is a physical
network we'll refer to as "physnet1".

The first step here is to configure each hypervisor with bridge mappings
that tell ovn-controller that a local bridge called "br-eth1" is used to
reach the network called "physnet1".  We can simulate the inital setup
of this environment in ovs-sandbox with the following commands:

  # Setup the local hypervisor (A)
  ovs-vsctl add-br br-eth1
  ovs-vsctl set open . external-ids:ovn-bridge-mappings=physnet1:br-eth1

  # Create a fake remote hypervisor (B)
  ovn-sbctl chassis-add fakechassis geneve 127.0.0.1

To get the behavior we want, we model every Neutron port connected to a
Neutron provider network as an OVN logical switch with 2 ports.  The
first port is a normal logical port to be used by the VM.  The second
logical port is a special port with its type set to "localnet".

You could imagine an alternative configuration where there are many OVN
logical ports with a single OVN "localnet" logical port on the same OVN
logical switch.  This setup provides something different, where the
logical ports would communicate with eath other in logical space via
tunnnels between hypervisors.  For Neutron's use case, we want all ports
communicating via an existing network without the use of an overlay.

To simulate the creation of the OVN logical switches and OVN logical
ports for A1, A2, B1, and B2, you can run the following commands:

  # Create 4 OVN logical switches.  Each logical switch has 2 ports,
  # port1 for a VM and physnet1 for the existing network we are
  # connecting to.
  for n in 1 2 3 4; do
      ovn-nbctl lswitch-add provnet1-$n

      ovn-nbctl lport-add provnet1-$n provnet1-$n-port1
      ovn-nbctl lport-set-macs provnet1-$n-port1 00:00:00:00:00:0$n
      ovn-nbctl lport-set-port-security provnet1-$n-port1 00:00:00:00:00:0$n

      ovn-nbctl lport-add provnet1-$n provnet1-$n-physnet1
      ovn-nbctl lport-set-macs provnet1-$n-physnet1 unknown
      ovn-nbctl lport-set-type provnet1-$n-physnet1 localnet
      ovn-nbctl lport-set-options provnet1-$n-physnet1 network_name=physnet1
  done

  # Bind lport1 (A1) and lport2 (A2) to the local hypervisor.
  ovs-vsctl add-port br-int lport1 -- set Interface lport1 external_ids:iface-id=provnet1-1-port1
  ovs-vsctl add-port br-int lport2 -- set Interface lport2 external_ids:iface-id=provnet1-2-port1

  # Bind the other 2 ports to the fake remote hypervisor.
  ovn-sbctl lport-bind provnet1-3-port1 fakechassis
  ovn-sbctl lport-bind provnet1-4-port1 fakechassis

After running these commands, we have the following logical
configuration:

  $ ovn-nbctl show
    lswitch 035645fc-b2ff-4e26-b953-69addba80a9a (provnet1-4)
        lport provnet1-4-physnet1
            macs: unknown
        lport provnet1-4-port1
            macs: 00:00:00:00:00:04
    lswitch 66212a85-b3b6-4688-bcf6-8062941a2d96 (provnet1-2)
        lport provnet1-2-physnet1
            macs: unknown
        lport provnet1-2-port1
            macs: 00:00:00:00:00:02
    lswitch fc5b1141-0216-4fa7-86f3-461811c1fc9b (provnet1-3)
        lport provnet1-3-physnet1
            macs: unknown
        lport provnet1-3-port1
            macs: 00:00:00:00:00:03
    lswitch 9b1d2636-e654-4d43-84e8-a921af611b33 (provnet1-1)
        lport provnet1-1-physnet1
            macs: unknown
        lport provnet1-1-port1
            macs: 00:00:00:00:00:01

We can also look at OVN_Southbound to see that 2 logical ports are bound
to each hypervisor:

  $ ovn-sbctl show
  Chassis "56b18105-5706-46ef-80c4-ff20979ab068"
      Encap geneve
          ip: "127.0.0.1"
      Port_Binding "provnet1-1-port1"
      Port_Binding "provnet1-2-port1"
  Chassis fakechassis
      Encap geneve
          ip: "127.0.0.1"
      Port_Binding "provnet1-3-port1"
      Port_Binding "provnet1-4-port1"

Now we can generate several packets to test how a packet would be
processed on hypervisor A.  The OpenFlow port numbers in this demo are:

  1 - patch port to br-eth1 (physnet1)
  2 - tunnel to fakechassis
  3 - lport1 (A1)
  4 - lport2 (A2)

Packet test #1: A1 to A2 - This will be output to ofport 1.  Despite
both VMs being local to this hypervisor, all packets betwen the VMs go
through physnet1.  In practice, this will get optimized at br-eth1.

  ovs-appctl ofproto/trace br-int \
    in_port=3,dl_src=00:00:00:00:00:01,dl_dst=00:00:00:00:00:02 -generate

Packet test #2: physnet1 to A2 - Consider this a continuation of test
is attached to will be considered.  The end result should be that the
only output is to ofport 4 (A2).

  ovs-appctl ofproto/trace br-int \
    in_port=1,dl_src=00:00:00:00:00:01,dl_dst=00:00:00:00:00:02 -generate

Packet test #3: A1 to B1 - This will be output to ofport 1, as physnet1
is to be used to reach any other port.  When it arrives at hypervisor B,
processing would look just like test #2.

  ovs-appctl ofproto/trace br-int \
    in_port=3,dl_src=00:00:00:00:00:01,dl_dst=00:00:00:00:00:03 -generate

Packet test #4: A1 broadcast. - Again, the packet will only be sent to
physnet1.

  ovs-appctl ofproto/trace br-int \
    in_port=3,dl_src=00:00:00:00:00:01,dl_dst=ff:ff:ff:ff:ff:ff -generate

Packet test #5: B1 broadcast arriving at hypervisor A.  This is somewhat
a continuation of test #4.  When a broadcast packet arrives from
physnet1 on hypervisor A, we should see it output to both A1 and A2
(ofports 3 and 4).

  ovs-appctl ofproto/trace br-int \
    in_port=1,dl_src=00:00:00:00:00:03,dl_dst=ff:ff:ff:ff:ff:ff -generate

Signed-off-by: Russell Bryant <[email protected]>
russellb referenced this pull request in russellb/ovs Aug 25, 2015
Introduce a new logical port type called "localnet".  A logical port
with this type also has an option called "network_name".  A "localnet"
logical port represents a connection to a network that is locally
accessible from each chassis running ovn-controller.  ovn-controller
will use the ovn-bridge-mappings configuration to figure out which
patch port on br-int should be used for this port.

OpenStack Neutron has an API extension called "provider networks" which
allows an administrator to specify that it would like ports directly
attached to some pre-existing network in their environment.  There was a
previous thread where we got into the details of this here:

  http://openvswitch.org/pipermail/dev/2015-June/056765.html

The case where this would be used is an environment that isn't actually
interested in virtual networks and just wants all of their compute
resources connected up to externally managed networks.  Even in this
environment, OVN still has a lot of value to add.  OVN implements port
security and ACLs for all ports connected to these networks.  OVN also
provides the configuration interface and control plane to manage this
across many hypervisors.

As a specific example, consider an environment with two hypvervisors
(A and B) with two VMs on each hypervisor (A1, A2, B1, B2).  Now imagine
that the desired setup from an OpenStack perspective is to have all of
these VMs attached to the same provider network, which is a physical
network we'll refer to as "physnet1".

The first step here is to configure each hypervisor with bridge mappings
that tell ovn-controller that a local bridge called "br-eth1" is used to
reach the network called "physnet1".  We can simulate the inital setup
of this environment in ovs-sandbox with the following commands:

  # Setup the local hypervisor (A)
  ovs-vsctl add-br br-eth1
  ovs-vsctl set open . external-ids:ovn-bridge-mappings=physnet1:br-eth1

  # Create a fake remote hypervisor (B)
  ovn-sbctl chassis-add fakechassis geneve 127.0.0.1

To get the behavior we want, we model every Neutron port connected to a
Neutron provider network as an OVN logical switch with 2 ports.  The
first port is a normal logical port to be used by the VM.  The second
logical port is a special port with its type set to "localnet".

You could imagine an alternative configuration where there are many OVN
logical ports with a single OVN "localnet" logical port on the same OVN
logical switch.  This setup provides something different, where the
logical ports would communicate with eath other in logical space via
tunnnels between hypervisors.  For Neutron's use case, we want all ports
communicating via an existing network without the use of an overlay.

To simulate the creation of the OVN logical switches and OVN logical
ports for A1, A2, B1, and B2, you can run the following commands:

  # Create 4 OVN logical switches.  Each logical switch has 2 ports,
  # port1 for a VM and physnet1 for the existing network we are
  # connecting to.
  for n in 1 2 3 4; do
      ovn-nbctl lswitch-add provnet1-$n

      ovn-nbctl lport-add provnet1-$n provnet1-$n-port1
      ovn-nbctl lport-set-macs provnet1-$n-port1 00:00:00:00:00:0$n
      ovn-nbctl lport-set-port-security provnet1-$n-port1 00:00:00:00:00:0$n

      ovn-nbctl lport-add provnet1-$n provnet1-$n-physnet1
      ovn-nbctl lport-set-macs provnet1-$n-physnet1 unknown
      ovn-nbctl lport-set-type provnet1-$n-physnet1 localnet
      ovn-nbctl lport-set-options provnet1-$n-physnet1 network_name=physnet1
  done

  # Bind lport1 (A1) and lport2 (A2) to the local hypervisor.
  ovs-vsctl add-port br-int lport1 -- set Interface lport1 external_ids:iface-id=provnet1-1-port1
  ovs-vsctl add-port br-int lport2 -- set Interface lport2 external_ids:iface-id=provnet1-2-port1

  # Bind the other 2 ports to the fake remote hypervisor.
  ovn-sbctl lport-bind provnet1-3-port1 fakechassis
  ovn-sbctl lport-bind provnet1-4-port1 fakechassis

After running these commands, we have the following logical
configuration:

  $ ovn-nbctl show
    lswitch 035645fc-b2ff-4e26-b953-69addba80a9a (provnet1-4)
        lport provnet1-4-physnet1
            macs: unknown
        lport provnet1-4-port1
            macs: 00:00:00:00:00:04
    lswitch 66212a85-b3b6-4688-bcf6-8062941a2d96 (provnet1-2)
        lport provnet1-2-physnet1
            macs: unknown
        lport provnet1-2-port1
            macs: 00:00:00:00:00:02
    lswitch fc5b1141-0216-4fa7-86f3-461811c1fc9b (provnet1-3)
        lport provnet1-3-physnet1
            macs: unknown
        lport provnet1-3-port1
            macs: 00:00:00:00:00:03
    lswitch 9b1d2636-e654-4d43-84e8-a921af611b33 (provnet1-1)
        lport provnet1-1-physnet1
            macs: unknown
        lport provnet1-1-port1
            macs: 00:00:00:00:00:01

We can also look at OVN_Southbound to see that 2 logical ports are bound
to each hypervisor:

  $ ovn-sbctl show
  Chassis "56b18105-5706-46ef-80c4-ff20979ab068"
      Encap geneve
          ip: "127.0.0.1"
      Port_Binding "provnet1-1-port1"
      Port_Binding "provnet1-2-port1"
  Chassis fakechassis
      Encap geneve
          ip: "127.0.0.1"
      Port_Binding "provnet1-3-port1"
      Port_Binding "provnet1-4-port1"

Now we can generate several packets to test how a packet would be
processed on hypervisor A.  The OpenFlow port numbers in this demo are:

  1 - patch port to br-eth1 (physnet1)
  2 - tunnel to fakechassis
  3 - lport1 (A1)
  4 - lport2 (A2)

Packet test #1: A1 to A2 - This will be output to ofport 1.  Despite
both VMs being local to this hypervisor, all packets betwen the VMs go
through physnet1.  In practice, this will get optimized at br-eth1.

  ovs-appctl ofproto/trace br-int \
    in_port=3,dl_src=00:00:00:00:00:01,dl_dst=00:00:00:00:00:02 -generate

Packet test #2: physnet1 to A2 - Consider this a continuation of test
is attached to will be considered.  The end result should be that the
only output is to ofport 4 (A2).

  ovs-appctl ofproto/trace br-int \
    in_port=1,dl_src=00:00:00:00:00:01,dl_dst=00:00:00:00:00:02 -generate

Packet test #3: A1 to B1 - This will be output to ofport 1, as physnet1
is to be used to reach any other port.  When it arrives at hypervisor B,
processing would look just like test #2.

  ovs-appctl ofproto/trace br-int \
    in_port=3,dl_src=00:00:00:00:00:01,dl_dst=00:00:00:00:00:03 -generate

Packet test #4: A1 broadcast. - Again, the packet will only be sent to
physnet1.

  ovs-appctl ofproto/trace br-int \
    in_port=3,dl_src=00:00:00:00:00:01,dl_dst=ff:ff:ff:ff:ff:ff -generate

Packet test #5: B1 broadcast arriving at hypervisor A.  This is somewhat
a continuation of test #4.  When a broadcast packet arrives from
physnet1 on hypervisor A, we should see it output to both A1 and A2
(ofports 3 and 4).

  ovs-appctl ofproto/trace br-int \
    in_port=1,dl_src=00:00:00:00:00:03,dl_dst=ff:ff:ff:ff:ff:ff -generate

Signed-off-by: Russell Bryant <[email protected]>
russellb referenced this pull request in russellb/ovs Aug 25, 2015
Introduce a new logical port type called "localnet".  A logical port
with this type also has an option called "network_name".  A "localnet"
logical port represents a connection to a network that is locally
accessible from each chassis running ovn-controller.  ovn-controller
will use the ovn-bridge-mappings configuration to figure out which
patch port on br-int should be used for this port.

OpenStack Neutron has an API extension called "provider networks" which
allows an administrator to specify that it would like ports directly
attached to some pre-existing network in their environment.  There was a
previous thread where we got into the details of this here:

  http://openvswitch.org/pipermail/dev/2015-June/056765.html

The case where this would be used is an environment that isn't actually
interested in virtual networks and just wants all of their compute
resources connected up to externally managed networks.  Even in this
environment, OVN still has a lot of value to add.  OVN implements port
security and ACLs for all ports connected to these networks.  OVN also
provides the configuration interface and control plane to manage this
across many hypervisors.

As a specific example, consider an environment with two hypvervisors
(A and B) with two VMs on each hypervisor (A1, A2, B1, B2).  Now imagine
that the desired setup from an OpenStack perspective is to have all of
these VMs attached to the same provider network, which is a physical
network we'll refer to as "physnet1".

The first step here is to configure each hypervisor with bridge mappings
that tell ovn-controller that a local bridge called "br-eth1" is used to
reach the network called "physnet1".  We can simulate the inital setup
of this environment in ovs-sandbox with the following commands:

  # Setup the local hypervisor (A)
  ovs-vsctl add-br br-eth1
  ovs-vsctl set open . external-ids:ovn-bridge-mappings=physnet1:br-eth1

  # Create a fake remote hypervisor (B)
  ovn-sbctl chassis-add fakechassis geneve 127.0.0.1

To get the behavior we want, we model every Neutron port connected to a
Neutron provider network as an OVN logical switch with 2 ports.  The
first port is a normal logical port to be used by the VM.  The second
logical port is a special port with its type set to "localnet".

You could imagine an alternative configuration where there are many OVN
logical ports with a single OVN "localnet" logical port on the same OVN
logical switch.  This setup provides something different, where the
logical ports would communicate with eath other in logical space via
tunnnels between hypervisors.  For Neutron's use case, we want all ports
communicating via an existing network without the use of an overlay.

To simulate the creation of the OVN logical switches and OVN logical
ports for A1, A2, B1, and B2, you can run the following commands:

  # Create 4 OVN logical switches.  Each logical switch has 2 ports,
  # port1 for a VM and physnet1 for the existing network we are
  # connecting to.
  for n in 1 2 3 4; do
      ovn-nbctl lswitch-add provnet1-$n

      ovn-nbctl lport-add provnet1-$n provnet1-$n-port1
      ovn-nbctl lport-set-macs provnet1-$n-port1 00:00:00:00:00:0$n
      ovn-nbctl lport-set-port-security provnet1-$n-port1 00:00:00:00:00:0$n

      ovn-nbctl lport-add provnet1-$n provnet1-$n-physnet1
      ovn-nbctl lport-set-macs provnet1-$n-physnet1 unknown
      ovn-nbctl lport-set-type provnet1-$n-physnet1 localnet
      ovn-nbctl lport-set-options provnet1-$n-physnet1 network_name=physnet1
  done

  # Bind lport1 (A1) and lport2 (A2) to the local hypervisor.
  ovs-vsctl add-port br-int lport1 -- set Interface lport1 external_ids:iface-id=provnet1-1-port1
  ovs-vsctl add-port br-int lport2 -- set Interface lport2 external_ids:iface-id=provnet1-2-port1

  # Bind the other 2 ports to the fake remote hypervisor.
  ovn-sbctl lport-bind provnet1-3-port1 fakechassis
  ovn-sbctl lport-bind provnet1-4-port1 fakechassis

After running these commands, we have the following logical
configuration:

  $ ovn-nbctl show
    lswitch 035645fc-b2ff-4e26-b953-69addba80a9a (provnet1-4)
        lport provnet1-4-physnet1
            macs: unknown
        lport provnet1-4-port1
            macs: 00:00:00:00:00:04
    lswitch 66212a85-b3b6-4688-bcf6-8062941a2d96 (provnet1-2)
        lport provnet1-2-physnet1
            macs: unknown
        lport provnet1-2-port1
            macs: 00:00:00:00:00:02
    lswitch fc5b1141-0216-4fa7-86f3-461811c1fc9b (provnet1-3)
        lport provnet1-3-physnet1
            macs: unknown
        lport provnet1-3-port1
            macs: 00:00:00:00:00:03
    lswitch 9b1d2636-e654-4d43-84e8-a921af611b33 (provnet1-1)
        lport provnet1-1-physnet1
            macs: unknown
        lport provnet1-1-port1
            macs: 00:00:00:00:00:01

We can also look at OVN_Southbound to see that 2 logical ports are bound
to each hypervisor:

  $ ovn-sbctl show
  Chassis "56b18105-5706-46ef-80c4-ff20979ab068"
      Encap geneve
          ip: "127.0.0.1"
      Port_Binding "provnet1-1-port1"
      Port_Binding "provnet1-2-port1"
  Chassis fakechassis
      Encap geneve
          ip: "127.0.0.1"
      Port_Binding "provnet1-3-port1"
      Port_Binding "provnet1-4-port1"

Now we can generate several packets to test how a packet would be
processed on hypervisor A.  The OpenFlow port numbers in this demo are:

  1 - patch port to br-eth1 (physnet1)
  2 - tunnel to fakechassis
  3 - lport1 (A1)
  4 - lport2 (A2)

Packet test #1: A1 to A2 - This will be output to ofport 1.  Despite
both VMs being local to this hypervisor, all packets betwen the VMs go
through physnet1.  In practice, this will get optimized at br-eth1.

  ovs-appctl ofproto/trace br-int \
    in_port=3,dl_src=00:00:00:00:00:01,dl_dst=00:00:00:00:00:02 -generate

Packet test #2: physnet1 to A2 - Consider this a continuation of test
is attached to will be considered.  The end result should be that the
only output is to ofport 4 (A2).

  ovs-appctl ofproto/trace br-int \
    in_port=1,dl_src=00:00:00:00:00:01,dl_dst=00:00:00:00:00:02 -generate

Packet test #3: A1 to B1 - This will be output to ofport 1, as physnet1
is to be used to reach any other port.  When it arrives at hypervisor B,
processing would look just like test #2.

  ovs-appctl ofproto/trace br-int \
    in_port=3,dl_src=00:00:00:00:00:01,dl_dst=00:00:00:00:00:03 -generate

Packet test #4: A1 broadcast. - Again, the packet will only be sent to
physnet1.

  ovs-appctl ofproto/trace br-int \
    in_port=3,dl_src=00:00:00:00:00:01,dl_dst=ff:ff:ff:ff:ff:ff -generate

Packet test #5: B1 broadcast arriving at hypervisor A.  This is somewhat
a continuation of test #4.  When a broadcast packet arrives from
physnet1 on hypervisor A, we should see it output to both A1 and A2
(ofports 3 and 4).

  ovs-appctl ofproto/trace br-int \
    in_port=1,dl_src=00:00:00:00:00:03,dl_dst=ff:ff:ff:ff:ff:ff -generate

Signed-off-by: Russell Bryant <[email protected]>
russellb referenced this pull request in russellb/ovs Aug 26, 2015
Introduce a new logical port type called "localnet".  A logical port
with this type also has an option called "network_name".  A "localnet"
logical port represents a connection to a network that is locally
accessible from each chassis running ovn-controller.  ovn-controller
will use the ovn-bridge-mappings configuration to figure out which
patch port on br-int should be used for this port.

OpenStack Neutron has an API extension called "provider networks" which
allows an administrator to specify that it would like ports directly
attached to some pre-existing network in their environment.  There was a
previous thread where we got into the details of this here:

  http://openvswitch.org/pipermail/dev/2015-June/056765.html

The case where this would be used is an environment that isn't actually
interested in virtual networks and just wants all of their compute
resources connected up to externally managed networks.  Even in this
environment, OVN still has a lot of value to add.  OVN implements port
security and ACLs for all ports connected to these networks.  OVN also
provides the configuration interface and control plane to manage this
across many hypervisors.

As a specific example, consider an environment with two hypvervisors
(A and B) with two VMs on each hypervisor (A1, A2, B1, B2).  Now imagine
that the desired setup from an OpenStack perspective is to have all of
these VMs attached to the same provider network, which is a physical
network we'll refer to as "physnet1".

The first step here is to configure each hypervisor with bridge mappings
that tell ovn-controller that a local bridge called "br-eth1" is used to
reach the network called "physnet1".  We can simulate the inital setup
of this environment in ovs-sandbox with the following commands:

  # Setup the local hypervisor (A)
  ovs-vsctl add-br br-eth1
  ovs-vsctl set open . external-ids:ovn-bridge-mappings=physnet1:br-eth1

  # Create a fake remote hypervisor (B)
  ovn-sbctl chassis-add fakechassis geneve 127.0.0.1

To get the behavior we want, we model every Neutron port connected to a
Neutron provider network as an OVN logical switch with 2 ports.  The
first port is a normal logical port to be used by the VM.  The second
logical port is a special port with its type set to "localnet".

You could imagine an alternative configuration where there are many OVN
logical ports with a single OVN "localnet" logical port on the same OVN
logical switch.  This setup provides something different, where the
logical ports would communicate with eath other in logical space via
tunnnels between hypervisors.  For Neutron's use case, we want all ports
communicating via an existing network without the use of an overlay.

To simulate the creation of the OVN logical switches and OVN logical
ports for A1, A2, B1, and B2, you can run the following commands:

  # Create 4 OVN logical switches.  Each logical switch has 2 ports,
  # port1 for a VM and physnet1 for the existing network we are
  # connecting to.
  for n in 1 2 3 4; do
      ovn-nbctl lswitch-add provnet1-$n

      ovn-nbctl lport-add provnet1-$n provnet1-$n-port1
      ovn-nbctl lport-set-macs provnet1-$n-port1 00:00:00:00:00:0$n
      ovn-nbctl lport-set-port-security provnet1-$n-port1 00:00:00:00:00:0$n

      ovn-nbctl lport-add provnet1-$n provnet1-$n-physnet1
      ovn-nbctl lport-set-macs provnet1-$n-physnet1 unknown
      ovn-nbctl lport-set-type provnet1-$n-physnet1 localnet
      ovn-nbctl lport-set-options provnet1-$n-physnet1 network_name=physnet1
  done

  # Bind lport1 (A1) and lport2 (A2) to the local hypervisor.
  ovs-vsctl add-port br-int lport1 -- set Interface lport1 external_ids:iface-id=provnet1-1-port1
  ovs-vsctl add-port br-int lport2 -- set Interface lport2 external_ids:iface-id=provnet1-2-port1

  # Bind the other 2 ports to the fake remote hypervisor.
  ovn-sbctl lport-bind provnet1-3-port1 fakechassis
  ovn-sbctl lport-bind provnet1-4-port1 fakechassis

After running these commands, we have the following logical
configuration:

  $ ovn-nbctl show
    lswitch 035645fc-b2ff-4e26-b953-69addba80a9a (provnet1-4)
        lport provnet1-4-physnet1
            macs: unknown
        lport provnet1-4-port1
            macs: 00:00:00:00:00:04
    lswitch 66212a85-b3b6-4688-bcf6-8062941a2d96 (provnet1-2)
        lport provnet1-2-physnet1
            macs: unknown
        lport provnet1-2-port1
            macs: 00:00:00:00:00:02
    lswitch fc5b1141-0216-4fa7-86f3-461811c1fc9b (provnet1-3)
        lport provnet1-3-physnet1
            macs: unknown
        lport provnet1-3-port1
            macs: 00:00:00:00:00:03
    lswitch 9b1d2636-e654-4d43-84e8-a921af611b33 (provnet1-1)
        lport provnet1-1-physnet1
            macs: unknown
        lport provnet1-1-port1
            macs: 00:00:00:00:00:01

We can also look at OVN_Southbound to see that 2 logical ports are bound
to each hypervisor:

  $ ovn-sbctl show
  Chassis "56b18105-5706-46ef-80c4-ff20979ab068"
      Encap geneve
          ip: "127.0.0.1"
      Port_Binding "provnet1-1-port1"
      Port_Binding "provnet1-2-port1"
  Chassis fakechassis
      Encap geneve
          ip: "127.0.0.1"
      Port_Binding "provnet1-3-port1"
      Port_Binding "provnet1-4-port1"

Now we can generate several packets to test how a packet would be
processed on hypervisor A.  The OpenFlow port numbers in this demo are
listed below; a way to verify them in your own sandbox follows the list.

  1 - patch port to br-eth1 (physnet1)
  2 - tunnel to fakechassis
  3 - lport1 (A1)
  4 - lport2 (A2)
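
If you are following along, the numbers in your own sandbox may differ;
they can be verified with a standard OpenFlow listing (a generic OVS
command, not specific to this demo):

  # List br-int ports along with their OpenFlow port numbers
  ovs-ofctl show br-int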

Packet test #1: A1 to A2 - This will be output to ofport 1.  Despite
both VMs being local to this hypervisor, all packets between the VMs go
through physnet1.  In practice, this will get optimized at br-eth1.

  ovs-appctl ofproto/trace br-int \
    in_port=3,dl_src=00:00:00:00:00:01,dl_dst=00:00:00:00:00:02 -generate

Packet test #2: physnet1 to A2 - Consider this a continuation of test
#1.  The packet now arrives back from physnet1, so only the logical
switch that A2 is attached to will be considered.  The end result
should be that the only output is to ofport 4 (A2).

  ovs-appctl ofproto/trace br-int \
    in_port=1,dl_src=00:00:00:00:00:01,dl_dst=00:00:00:00:00:02 -generate

Packet test #3: A1 to B1 - This will be output to ofport 1, as physnet1
is to be used to reach any other port.  When it arrives at hypervisor B,
processing would look just like test #2.

  ovs-appctl ofproto/trace br-int \
    in_port=3,dl_src=00:00:00:00:00:01,dl_dst=00:00:00:00:00:03 -generate

Packet test #4: A1 broadcast - Again, the packet will only be sent to
physnet1.

  ovs-appctl ofproto/trace br-int \
    in_port=3,dl_src=00:00:00:00:00:01,dl_dst=ff:ff:ff:ff:ff:ff -generate

Packet test #5: B1 broadcast arriving at hypervisor A - This is somewhat
of a continuation of test #4.  When a broadcast packet arrives from
physnet1 on hypervisor A, we should see it output to both A1 and A2
(ofports 3 and 4).

  ovs-appctl ofproto/trace br-int \
    in_port=1,dl_src=00:00:00:00:00:03,dl_dst=ff:ff:ff:ff:ff:ff -generate
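
If a trace does not match the behavior described above, the flows that
ovn-controller installed can be inspected directly (a generic OVS
command; the table contents depend on the configuration built above):

  # Dump the OpenFlow tables programmed on br-int
  ovs-ofctl dump-flows br-int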

Signed-off-by: Russell Bryant <[email protected]>
russellb referenced this pull request in russellb/ovs Sep 2, 2015
russellb referenced this pull request in russellb/ovs Sep 2, 2015
russellb referenced this pull request in russellb/ovs Sep 2, 2015
russellb referenced this pull request in russellb/ovs Sep 2, 2015
russellb referenced this pull request in russellb/ovs Sep 3, 2015
russellb referenced this pull request in russellb/ovs Sep 3, 2015
blp pushed a commit that referenced this pull request Sep 8, 2015
pshelar pushed a commit that referenced this pull request Dec 17, 2015
The following crash was reported for the STT tunnel.  I am not able to
reproduce it, but the use of a wrong list manipulation API is the likely
culprit.

---8<---
IP: [<ffffffffc0e731fd>] nf_ip_hook+0xfd/0x180 [openvswitch]
Oops: 0000 [#1] PREEMPT SMP
Hardware name: VMware, Inc. VMware Virtual Platform/440BX
RIP: 0010:[<ffffffffc0e731fd>]  [<ffffffffc0e731fd>] nf_ip_hook+0xfd/0x180 [openvswitch]
RSP: 0018:ffff88043fd03cd0  EFLAGS: 00010206
RAX: 0000000000000000 RBX: ffff8801008e2200 RCX: 0000000000000034
RDX: 0000000000000110 RSI: ffff8801008e2200 RDI: ffff8801533a3880
RBP: ffff88043fd03d00 R08: ffffffff90646d10 R09: ffff880164b27000
R10: 0000000000000003 R11: ffff880155eb9dd8 R12: 0000000000000028
R13: ffff8802283dc580 R14: 00000000000076b4 R15: ffff880013b20000
FS:  00007ff5ba73b700(0000) GS:ffff88043fd00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000020 CR3: 000000037ff96000 CR4: 00000000000007e0
Stack:
 ffff8801533a3890 ffff88043fd03d80 ffffffff90646d10 0000000000000000
 ffff880164b27000 ffff8801008e2200 ffff88043fd03d48 ffffffff9064050a
 ffffffff90d0f930 ffffffffc0e7ef80 0000000000000001 ffff8801008e2200
Call Trace:
 <IRQ>
 [<ffffffff90646d10>] ? ip_rcv_finish+0x350/0x350
 [<ffffffff9064050a>] nf_iterate+0x9a/0xb0
 [<ffffffff90646d10>] ? ip_rcv_finish+0x350/0x350
 [<ffffffff9064059c>] nf_hook_slow+0x7c/0x120
 [<ffffffff90646d10>] ? ip_rcv_finish+0x350/0x350
 [<ffffffff906470f3>] ip_local_deliver+0x73/0x80
 [<ffffffff90646a3d>] ip_rcv_finish+0x7d/0x350
 [<ffffffff90647398>] ip_rcv+0x298/0x3d0
 [<ffffffff9060fc56>] __netif_receive_skb_core+0x696/0x880
 [<ffffffff9060fe58>] __netif_receive_skb+0x18/0x60
 [<ffffffff90610b3e>] process_backlog+0xae/0x180
 [<ffffffff906102c2>] net_rx_action+0x152/0x270
 [<ffffffff9006d625>] __do_softirq+0xf5/0x320
 [<ffffffff9071d15c>] do_softirq_own_stack+0x1c/0x30

Reported-by: Joe Stringer <[email protected]>
Signed-off-by: Pravin B Shelar <[email protected]>
Acked-by: Jesse Gross <[email protected]>
pshelar pushed a commit that referenced this pull request Dec 17, 2015
ovsrobot pushed a commit to ovsrobot/ovs that referenced this pull request Sep 20, 2023
OVN unit tests highlight this:

ERROR: LeakSanitizer: detected memory leaks
Direct leak of 1344 byte(s) in 1 object(s) allocated from:
    #0 0x4db0b7 in calloc (/root/master-ovn/ovs/ovsdb/ovsdb-server+0x4db0b7)
    #1 0x5c2162 in xcalloc__ /root/master-ovn/ovs/lib/util.c:124:31
    #2 0x5c221c in xcalloc /root/master-ovn/ovs/lib/util.c:161:12
    #3 0x54afbc in ovsdb_condition_diff /root/master-ovn/ovs/ovsdb/condition.c:527:21
    #4 0x529da6 in ovsdb_monitor_table_condition_update /root/master-ovn/ovs/ovsdb/monitor.c:824:5
    #5 0x524fa4 in ovsdb_jsonrpc_parse_monitor_cond_change_request /root/master-ovn/ovs/ovsdb/jsonrpc-server.c:1557:13
    #6 0x5235c3 in ovsdb_jsonrpc_monitor_cond_change /root/master-ovn/ovs/ovsdb/jsonrpc-server.c:1624:25
    #7 0x5217f2 in ovsdb_jsonrpc_session_got_request /root/master-ovn/ovs/ovsdb/jsonrpc-server.c:1034:17
    #8 0x520ee6 in ovsdb_jsonrpc_session_run /root/master-ovn/ovs/ovsdb/jsonrpc-server.c:572:17
    #9 0x51ffbe in ovsdb_jsonrpc_session_run_all /root/master-ovn/ovs/ovsdb/jsonrpc-server.c:602:21
    #10 0x51fbcf in ovsdb_jsonrpc_server_run /root/master-ovn/ovs/ovsdb/jsonrpc-server.c:417:9
    #11 0x517550 in main_loop /root/master-ovn/ovs/ovsdb/ovsdb-server.c:224:9
    #12 0x512e80 in main /root/master-ovn/ovs/ovsdb/ovsdb-server.c:507:5
    #13 0x7f9ecf675b74 in __libc_start_main (/lib64/libc.so.6+0x27b74)

Signed-off-by: Xavier Simonart <[email protected]>
Signed-off-by: 0-day Robot <[email protected]>
ovsrobot pushed a commit to ovsrobot/ovs that referenced this pull request Nov 6, 2023
This avoids misaligned accesses as flagged by UBSan when running CoPP
system tests:

  controller/pinctrl.c:7129:28: runtime error: member access within misaligned address 0x61b0000627be for type 'const struct bfd_msg', which requires 4 byte alignment
  0x61b0000627be: note: pointer points here
   00 20 d3 d4 20 60  05 18 a8 99 e7 7b 92 1d  36 73 00 0f 42 40 00 0f  42 40 00 00 00 00 00 00  00 03
               ^
      #0 0x621b8f in pinctrl_check_bfd_msg /root/ovn/controller/pinctrl.c:7129:28
      openvswitch#1 0x621b8f in pinctrl_handle_bfd_msg /root/ovn/controller/pinctrl.c:7183:10
      openvswitch#2 0x621b8f in process_packet_in /root/ovn/controller/pinctrl.c:3320:9
      openvswitch#3 0x621b8f in pinctrl_recv /root/ovn/controller/pinctrl.c:3386:9
      #4 0x621b8f in pinctrl_handler /root/ovn/controller/pinctrl.c:3481:17
      #5 0xa2d89a in ovsthread_wrapper /root/ovn/ovs/lib/ovs-thread.c:422:12
      openvswitch#6 0x7fcb598081ce in start_thread (/lib64/libpthread.so.0+0x81ce)
      #7 0x7fcb58439dd2 in clone (/lib64/libc.so.6+0x39dd2)

Fixes: 1172035 ("controller: introduce BFD tx path in ovn-controller.")
Reported-by: Ilya Maximets <[email protected]>
Acked-by: Ales Musil <[email protected]>
Signed-off-by: Dumitru Ceara <[email protected]>
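
The usual remedy for this kind of misaligned-access report is to memcpy the unaligned bytes into a properly aligned local object before touching its members. A minimal sketch of that pattern, with a hypothetical message struct standing in for struct bfd_msg:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Stand-in for a wire-format message with 4-byte alignment. */
    struct msg {
        uint32_t my_disc;
        uint32_t your_disc;
    };

    /* 'buf' may point anywhere inside a packet, so it can be misaligned.
     * memcpy() into a local struct gives an aligned copy that is safe to
     * access member by member. */
    static uint32_t
    read_my_disc(const uint8_t *buf)
    {
        struct msg m;
        memcpy(&m, buf, sizeof m);
        return m.my_disc;
    }

    int
    main(void)
    {
        uint8_t packet[16] = { 0x01, 0x02, 0x03, 0x04 };
        /* +1 makes the source pointer intentionally misaligned. */
        printf("%u\n", (unsigned) read_my_disc(packet + 1));
        return 0;
    }
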
ovsrobot pushed a commit to ovsrobot/ovs that referenced this pull request Nov 6, 2023
Nothing is freed wherever we call ctl_fatal, which is fine because the
program is about to shut down anyway; however, one of the leaks was
caught by Address Sanitizer.
Fix most of the leaks that happen before the call to ctl_fatal.

Direct leak of 64 byte(s) in 1 object(s) allocated from:
    #0 0x568d70 in __interceptor_realloc.part.0 asan_malloc_linux.cpp.o
    #1 0xa530a5 in xrealloc__ /workspace/ovn/ovs/lib/util.c:147:9
    #2 0xa530a5 in xrealloc /workspace/ovn/ovs/lib/util.c:179:12
    #3 0xa530a5 in x2nrealloc /workspace/ovn/ovs/lib/util.c:239:12
    #4 0xa2ee57 in svec_expand /workspace/ovn/ovs/lib/svec.c:92:23
    #5 0xa2ef6e in svec_terminate /workspace/ovn/ovs/lib/svec.c:116:5
    #6 0x82c117 in ovs_cmdl_env_parse_all /workspace/ovn/ovs/lib/command-line.c:98:5
    #7 0x5ad70d in ovn_dbctl_main /workspace/ovn/utilities/ovn-dbctl.c:132:20
    #8 0x5b58c7 in main /workspace/ovn/utilities/ovn-nbctl.c:7943:12

Indirect leak of 10 byte(s) in 1 object(s) allocated from:
    #0 0x569c07 in malloc (/workspace/ovn/utilities/ovn-nbctl+0x569c07) (BuildId: 3a287416f70de43f52382f0336980c876f655f90)
    #1 0xa52d4c in xmalloc__ /workspace/ovn/ovs/lib/util.c:137:15
    #2 0xa52d4c in xmalloc /workspace/ovn/ovs/lib/util.c:172:12
    #3 0xa52d4c in xmemdup0 /workspace/ovn/ovs/lib/util.c:193:15
    #4 0xa2e580 in svec_add /workspace/ovn/ovs/lib/svec.c:71:27
    #5 0x82c041 in ovs_cmdl_env_parse_all /workspace/ovn/ovs/lib/command-line.c:91:5
    #6 0x5ad70d in ovn_dbctl_main /workspace/ovn/utilities/ovn-dbctl.c:132:20
    #7 0x5b58c7 in main /workspace/ovn/utilities/ovn-nbctl.c:7943:12

Indirect leak of 8 byte(s) in 2 object(s) allocated from:
    #0 0x569c07 in malloc (/workspace/ovn/utilities/ovn-nbctl+0x569c07)
    #1 0xa52d4c in xmalloc__ /workspace/ovn/ovs/lib/util.c:137:15
    #2 0xa52d4c in xmalloc /workspace/ovn/ovs/lib/util.c:172:12
    #3 0xa52d4c in xmemdup0 /workspace/ovn/ovs/lib/util.c:193:15
    #4 0xa2e580 in svec_add /workspace/ovn/ovs/lib/svec.c:71:27
    #5 0x82c0b6 in ovs_cmdl_env_parse_all /workspace/ovn/ovs/lib/command-line.c:96:9
    #6 0x5ad70d in ovn_dbctl_main /workspace/ovn/utilities/ovn-dbctl.c:132:20
    #7 0x5b58c7 in main /workspace/ovn/utilities/ovn-nbctl.c:7943:12

Signed-off-by: Ales Musil <[email protected]>
Reviewed-by: Simon Horman <[email protected]>
Signed-off-by: Dumitru Ceara <[email protected]>
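
The shape of the fix is simply to release everything the code owns before the fatal exit, even though the process is about to terminate, so that LeakSanitizer sees no outstanding allocations. A hedged sketch of that pattern; the names and the error path are illustrative, not the actual ovn-dbctl code:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Illustrative stand-in for ctl_fatal(): free what we own before
     * exiting, so leak checkers report nothing outstanding. */
    static void
    fatal_exit(char **args, size_t n, const char *msg)
    {
        for (size_t i = 0; i < n; i++) {
            free(args[i]);
        }
        free(args);
        fprintf(stderr, "fatal: %s\n", msg);
        exit(EXIT_FAILURE);
    }

    int
    main(void)
    {
        size_t n = 2;
        char **args = malloc(n * sizeof *args);   /* Sketch: unchecked. */
        args[0] = strdup("--db");
        args[1] = strdup("unix:nb.sock");
        fatal_exit(args, n, "bad environment");   /* No leak reported. */
    }
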
ovsrobot pushed a commit to ovsrobot/ovs that referenced this pull request Nov 6, 2023
…eletion.

When multiple LSP deletions are handled in incremental processing and
the handler hits an LSP that can't be incrementally processed after it
has already processed some of the deletions, it falls back to recompute
without destroying the ovn_port objects it has already handled,
resulting in memory leaks. See the example below, which is detected by
the new test case added by this patch when running with address sanitizer.

=======================

Indirect leak of 936 byte(s) in 3 object(s) allocated from:
    #0 0x55bce7 in calloc (/home/hanzhou/src/ovn/_build_as/northd/ovn-northd+0x55bce7)
    #1 0x773f4e in xcalloc__ /home/hanzhou/src/ovs/_build/../lib/util.c:121:31
    #2 0x773f4e in xzalloc__ /home/hanzhou/src/ovs/_build/../lib/util.c:131:12
    #3 0x773f4e in xzalloc /home/hanzhou/src/ovs/_build/../lib/util.c:165:12
    #4 0x60106c in en_northd_run /home/hanzhou/src/ovn/_build_as/../northd/en-northd.c:137:5
    #5 0x61c6a8 in engine_recompute /home/hanzhou/src/ovn/_build_as/../lib/inc-proc-eng.c:415:5
    #6 0x61bee0 in engine_compute /home/hanzhou/src/ovn/_build_as/../lib/inc-proc-eng.c:454:17
    #7 0x61bee0 in engine_run_node /home/hanzhou/src/ovn/_build_as/../lib/inc-proc-eng.c:503:14
    #8 0x61bee0 in engine_run /home/hanzhou/src/ovn/_build_as/../lib/inc-proc-eng.c:528:9
    #9 0x605e23 in inc_proc_northd_run /home/hanzhou/src/ovn/_build_as/../northd/inc-proc-northd.c:317:9
    #10 0x5fe43b in main /home/hanzhou/src/ovn/_build_as/../northd/ovn-northd.c:934:33
    #11 0x7f473933c1a1 in __libc_start_main (/lib64/libc.so.6+0x281a1)

Indirect leak of 204 byte(s) in 3 object(s) allocated from:
    #0 0x55bea8 in realloc (/home/hanzhou/src/ovn/_build_as/northd/ovn-northd+0x55bea8)
    #1 0x773c7d in xrealloc__ /home/hanzhou/src/ovs/_build/../lib/util.c:147:9
    #2 0x773c7d in xrealloc /home/hanzhou/src/ovs/_build/../lib/util.c:179:12
    #3 0x614bd4 in extract_addresses /home/hanzhou/src/ovn/_build_as/../lib/ovn-util.c:228:12
    #4 0x614bd4 in extract_lsp_addresses /home/hanzhou/src/ovn/_build_as/../lib/ovn-util.c:243:20
    #5 0x5c8d90 in parse_lsp_addrs /home/hanzhou/src/ovn/_build_as/../northd/northd.c:2468:21
    #6 0x5b2ebf in join_logical_ports /home/hanzhou/src/ovn/_build_as/../northd/northd.c:2594:13
    #7 0x5b2ebf in build_ports /home/hanzhou/src/ovn/_build_as/../northd/northd.c:4711:5
    #8 0x5b2ebf in ovnnb_db_run /home/hanzhou/src/ovn/_build_as/../northd/northd.c:17376:5
    #9 0x60106c in en_northd_run /home/hanzhou/src/ovn/_build_as/../northd/en-northd.c:137:5
    #10 0x61c6a8 in engine_recompute /home/hanzhou/src/ovn/_build_as/../lib/inc-proc-eng.c:415:5
    #11 0x61bee0 in engine_compute /home/hanzhou/src/ovn/_build_as/../lib/inc-proc-eng.c:454:17
    #12 0x61bee0 in engine_run_node /home/hanzhou/src/ovn/_build_as/../lib/inc-proc-eng.c:503:14
    #13 0x61bee0 in engine_run /home/hanzhou/src/ovn/_build_as/../lib/inc-proc-eng.c:528:9
    #14 0x605e23 in inc_proc_northd_run /home/hanzhou/src/ovn/_build_as/../northd/inc-proc-northd.c:317:9
    #15 0x5fe43b in main /home/hanzhou/src/ovn/_build_as/../northd/ovn-northd.c:934:33
    #16 0x7f473933c1a1 in __libc_start_main (/lib64/libc.so.6+0x281a1)
...

Fixes: b337750 ("northd: Incremental processing of VIF changes in 'northd' node.")
Reported-by: Dumitru Ceara <[email protected]>
Signed-off-by: Han Zhou <[email protected]>
Acked-by: Dumitru Ceara <[email protected]>
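
The underlying rule: an incremental handler that gives up partway through must destroy whatever it has already built before requesting a full recompute. A sketch of that cleanup pattern under assumed, simplified types; none of these names are northd's:

    #include <stdbool.h>
    #include <stdlib.h>
    #include <string.h>

    struct port {
        char *name;
    };

    static void
    port_destroy(struct port *p)
    {
        if (p) {
            free(p->name);
            free(p);
        }
    }

    /* If the handler meets a change it cannot process incrementally, it
     * destroys everything it already built before asking the caller to
     * fall back to a full recompute; otherwise those objects leak. */
    static bool
    handle_deletions(struct port **done, size_t n_done, bool next_ok)
    {
        if (!next_ok) {
            for (size_t i = 0; i < n_done; i++) {
                port_destroy(done[i]);
            }
            return false;   /* Caller must recompute from scratch. */
        }
        return true;
    }

    int
    main(void)
    {
        struct port *p = malloc(sizeof *p);
        p->name = strdup("lsp1");
        struct port *done[] = { p };
        handle_deletions(done, 1, false);   /* Falls back without leaking. */
        return 0;
    }
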
ovsrobot pushed a commit to ovsrobot/ovs that referenced this pull request Nov 6, 2023
It's specified in RFC 8415.  This also avoids having to free/realloc the
pfd->uuid.data memory.  That part was not correct anyway and was flagged
by ASan as a memory leak:

  Direct leak of 42 byte(s) in 3 object(s) allocated from:
      #0 0x55e5b6354c9e in malloc (/workspace/ovn-tmp/controller/ovn-controller+0x2edc9e) (BuildId: f963f8c756bd5a2207a9b3c922d4362e46bb3162)
      openvswitch#1 0x55e5b671878d in xmalloc__ /workspace/ovn-tmp/ovs/lib/util.c:140:15
      openvswitch#2 0x55e5b671878d in xmalloc /workspace/ovn-tmp/ovs/lib/util.c:175:12
      openvswitch#3 0x55e5b642cebc in pinctrl_parse_dhcpv6_reply /workspace/ovn-tmp/controller/pinctrl.c:997:20
      #4 0x55e5b642cebc in pinctrl_handle_dhcp6_server /workspace/ovn-tmp/controller/pinctrl.c:1040:9
      #5 0x55e5b642cebc in process_packet_in /workspace/ovn-tmp/controller/pinctrl.c:3210:9
      openvswitch#6 0x55e5b642cebc in pinctrl_recv /workspace/ovn-tmp/controller/pinctrl.c:3290:9
      #7 0x55e5b642cebc in pinctrl_handler /workspace/ovn-tmp/controller/pinctrl.c:3385:17
      #8 0x55e5b66ef664 in ovsthread_wrapper /workspace/ovn-tmp/ovs/lib/ovs-thread.c:423:12
      #9 0x7faa30194b42  (/lib/x86_64-linux-gnu/libc.so.6+0x94b42) (BuildId: 69389d485a9793dbe873f0ea2c93e02efaa9aa3d)

Fixes: faa44a0 ("controller: IPv6 Prefix-Delegation: introduce RENEW/REBIND msg support")
Signed-off-by: Ales Musil <[email protected]>
Co-authored-by: Ales Musil <[email protected]>
Signed-off-by: Dumitru Ceara <[email protected]>
Acked-by: Lorenzo Bianconi <[email protected]>
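
Because RFC 8415 bounds the size of a DUID (130 octets including the type code, if memory serves), a fixed-size buffer plus a length field can replace malloc'd storage entirely, which is what removes the free/realloc handling. A sketch under that assumption; the struct and constant names are made up:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define DUID_MAX 130    /* Upper bound taken from RFC 8415. */

    struct pd_state {
        uint8_t duid[DUID_MAX];
        size_t duid_len;
    };

    static int
    store_duid(struct pd_state *st, const uint8_t *data, size_t len)
    {
        if (len > DUID_MAX) {
            return -1;          /* Malformed: longer than the RFC allows. */
        }
        memcpy(st->duid, data, len);
        st->duid_len = len;
        return 0;               /* No allocation, nothing to leak or free. */
    }

    int
    main(void)
    {
        struct pd_state st;
        uint8_t duid[] = { 0x00, 0x03, 0x00, 0x01, 0xaa, 0xbb, 0xcc, 0xdd };
        printf("%d\n", store_duid(&st, duid, sizeof duid));
        return 0;
    }
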
ovsrobot pushed a commit to ovsrobot/ovs that referenced this pull request Mar 7, 2024
The new_buffer data pointer is NULL when the size of the cloned
buffer is 0. This is fine, as there is no need to allocate space.
However, the cloned buffer's header/msg might be the same pointer
as data, which causes undefined behavior by adding an offset of 0
to a NULL pointer. Check that the data buffer is not NULL before
attempting to apply the header/msg offset.

This was caught by OVN system test:

lib/ofpbuf.c:203:56: runtime error: applying zero offset to null pointer
    #0 0xa012fc in ofpbuf_clone_with_headroom /workspace/ovn/ovs/lib/ofpbuf.c:203:56
    #1 0x635fd4 in put_remote_port_redirect_overlay /workspace/ovn/controller/physical.c:397:40
    #2 0x635fd4 in consider_port_binding /workspace/ovn/controller/physical.c:1951:9
    #3 0x62e046 in physical_run /workspace/ovn/controller/physical.c:2447:9
    #4 0x601d98 in en_pflow_output_run /workspace/ovn/controller/ovn-controller.c:4690:5
    #5 0x707769 in engine_recompute /workspace/ovn/lib/inc-proc-eng.c:415:5
    #6 0x7060eb in engine_compute /workspace/ovn/lib/inc-proc-eng.c:454:17
    #7 0x7060eb in engine_run_node /workspace/ovn/lib/inc-proc-eng.c:503:14
    #8 0x7060eb in engine_run /workspace/ovn/lib/inc-proc-eng.c:528:9
    #9 0x5f9f26 in main /workspace/ovn/controller/ovn-controller.c

Signed-off-by: Ales Musil <[email protected]>
Signed-off-by: 0-day Robot <[email protected]>
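
A minimal sketch of the guarded pointer arithmetic the patch describes, with a simplified stand-in for the ofpbuf shape; even an offset of zero applied to a NULL pointer is undefined behavior in C:

    #include <stddef.h>
    #include <stdint.h>

    struct buf {
        uint8_t *data;      /* NULL when size == 0. */
        uint8_t *header;    /* Points into data, or NULL. */
        size_t size;
    };

    /* Rebase dst->header only when there is real data: applying even a
     * zero offset to a NULL pointer is undefined behavior. */
    static void
    clone_header_offset(struct buf *dst, const struct buf *src)
    {
        if (dst->data) {
            dst->header = dst->data + (src->header - src->data);
        } else {
            dst->header = NULL;
        }
    }

    int
    main(void)
    {
        struct buf src = { 0 }, dst = { 0 };
        clone_header_offset(&dst, &src);   /* Safe even for empty buffers. */
        return 0;
    }
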
ovsrobot pushed a commit to ovsrobot/ovs that referenced this pull request May 2, 2024
The hash was passed to memcpy as uint32_t *; however, the hash bytes
might be unaligned at that point. Cast it to uint8_t * instead,
which has only a single-byte alignment requirement.

lib/hash.c:46:22: runtime error: load of misaligned address 0x507000000065 for type 'const uint32_t *' (aka 'const unsigned int *'), which requires 4 byte alignment
0x507000000065: note: pointer points here
 73 62 2e 73 6f 63 6b  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  00
             ^
    #0 0x6191cb in hash_bytes /workspace/ovn/ovs/lib/hash.c:46:9
    #1 0x69d064 in hash_string /workspace/ovn/ovs/lib/hash.h:404:12
    #2 0x69d064 in hash_name /workspace/ovn/ovs/lib/shash.c:29:12
    #3 0x69d064 in shash_find /workspace/ovn/ovs/lib/shash.c:237:49
    #4 0x69dada in shash_find_data /workspace/ovn/ovs/lib/shash.c:251:31
    #5 0x507987 in add_remote /workspace/ovn/ovs/ovsdb/ovsdb-server.c:1382:15
    #6 0x507987 in parse_options /workspace/ovn/ovs/ovsdb/ovsdb-server.c:2659:13
    #7 0x507987 in main /workspace/ovn/ovs/ovsdb/ovsdb-server.c:751:5
    #8 0x7f47e3997087 in __libc_start_call_main (/lib64/libc.so.6+0x2a087) (BuildId: b098f1c75a76548bb230d8f551eae07a2aeccf06)
    #9 0x7f47e399714a in __libc_start_main@GLIBC_2.2.5 (/lib64/libc.so.6+0x2a14a) (BuildId: b098f1c75a76548bb230d8f551eae07a2aeccf06)
    #10 0x42de64 in _start (/workspace/ovn/ovs/ovsdb/ovsdb-server+0x42de64) (BuildId: 6c3f4e311556b29f84c9c4a5d6df5114dc08a12e)

SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior lib/hash.c:46:22

Signed-off-by: Ales Musil <[email protected]>
Signed-off-by: 0-day Robot <[email protected]>
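
The portable idiom for consuming possibly-unaligned bytes is to go through a byte pointer and memcpy into an aligned local, since uint8_t * carries no alignment requirement. A small sketch of the idea:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Read a 32-bit word through a byte pointer that may be misaligned.
     * memcpy is the portable idiom: compilers turn it into a single load
     * where unaligned loads are safe, and byte loads where they are not. */
    static uint32_t
    get_u32_unaligned(const uint8_t *p)
    {
        uint32_t x;
        memcpy(&x, p, sizeof x);
        return x;
    }

    int
    main(void)
    {
        uint8_t buf[8] = { 1, 2, 3, 4, 5 };
        printf("%u\n", (unsigned) get_u32_unaligned(buf + 1)); /* Misaligned OK. */
        return 0;
    }
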
ovsrobot pushed a commit to ovsrobot/ovs that referenced this pull request May 14, 2024
In function shash_replace_nocopy, the argument to free() is the address
of a global variable (an argument passed by function table_print_json__),
which is not memory allocated by malloc().

ovsdb-client -f json monitor Open_vSwitch --timestamp

ASan reports:
=================================================================
==1443083==ERROR: AddressSanitizer: attempting free on address
which was not malloc()-ed: 0x000000535980 in thread T0
    #0 0xffffaf1c9eac in __interceptor_free (/usr/lib64/libasan.so.6+
0xa4eac)
    #1 0x4826e4 in json_destroy_object lib/json.c:445
    #2 0x4826e4 in json_destroy__ lib/json.c:403
    #3 0x4cc4e4 in table_print lib/table.c:633
    #4 0x410650 in monitor_print_table ovsdb/ovsdb-client.c:1019
    #5 0x410650 in monitor_print ovsdb/ovsdb-client.c:1040
    #6 0x4110cc in monitor_print ovsdb/ovsdb-client.c:1030
    #7 0x4110cc in do_monitor__ ovsdb/ovsdb-client.c:1503
    #8 0x40743c in main ovsdb/ovsdb-client.c:283
    #9 0xffffaeb50038  (/usr/lib64/libc.so.6+0x2b038)
    #10 0xffffaeb50110 in __libc_start_main (/usr/lib64/libc.so.6+
0x2b110)
    #11 0x40906c in _start (/usr/local/bin/ovsdb-client+0x40906c)

Use xstrdup to allocate memory for argument "time".

Fixes: cb139fa ("table: New function table_format() for formatting a table as a string.")
Signed-off-by: Pengfei Sun <[email protected]>
Signed-off-by: 0-day Robot <[email protected]>
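
The rule behind the fix: an API that takes ownership of a string and later calls free() on it must be handed heap memory, so static or literal strings have to be duplicated first (xstrdup in OVS). A sketch with a stand-in for the take-ownership call:

    #include <stdlib.h>
    #include <string.h>

    /* Stand-in for an API like shash_replace_nocopy() that takes
     * ownership of 'value' and will eventually call free() on it. */
    static char *owned;

    static void
    take_ownership(char *value)
    {
        free(owned);
        owned = value;
    }

    int
    main(void)
    {
        /* Wrong: take_ownership("Time") would later free() a string
         * literal that was never malloc()-ed.
         * Right: hand over a heap copy the callee is allowed to free. */
        take_ownership(strdup("Time"));
        free(owned);
        return 0;
    }
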
ovsrobot pushed a commit to ovsrobot/ovs that referenced this pull request May 15, 2024
In function shash_replace_nocopy, the argument to free() is the address
of a global variable (an argument passed by function table_print_json__),
which is not memory allocated by malloc().

ovsdb-client -f json monitor Open_vSwitch --timestamp

ASan reports:
=================================================================
==1443083==ERROR: AddressSanitizer: attempting free on address
which was not malloc()-ed: 0x000000535980 in thread T0
    #0 0xffffaf1c9eac in __interceptor_free (/usr/lib64/libasan.so.6+
0xa4eac)
    #1 0x4826e4 in json_destroy_object lib/json.c:445
    #2 0x4826e4 in json_destroy__ lib/json.c:403
    #3 0x4cc4e4 in table_print lib/table.c:633
    #4 0x410650 in monitor_print_table ovsdb/ovsdb-client.c:1019
    #5 0x410650 in monitor_print ovsdb/ovsdb-client.c:1040
    #6 0x4110cc in monitor_print ovsdb/ovsdb-client.c:1030
    #7 0x4110cc in do_monitor__ ovsdb/ovsdb-client.c:1503
    #8 0x40743c in main ovsdb/ovsdb-client.c:283
    #9 0xffffaeb50038  (/usr/lib64/libc.so.6+0x2b038)
    #10 0xffffaeb50110 in __libc_start_main (/usr/lib64/libc.so.6+
0x2b110)
    #11 0x40906c in _start (/usr/local/bin/ovsdb-client+0x40906c)

Fixes: cb139fa ("table: New function table_format() for formatting a table as a string.")
Signed-off-by: Pengfei Sun <[email protected]>
Signed-off-by: 0-day Robot <[email protected]>
mkp-rh added a commit to mkp-rh/ovs that referenced this pull request Jun 12, 2024
When compiling with '-fsanitize=address,undefined', the "ovs-ofctl
ct-flush" test will yield the following undefined behavior flagged
by UBSan. This patch uses memcpy to move the 128-bit value onto the
stack before reading it.

lib/ofp-prop.c:277:14: runtime error: load of misaligned address
for type 'union ovs_be128', which requires 8 byte alignment
              ^
    #0 0x7735d4 in ofpprop_parse_u128 lib/ofp-prop.c:277
    #1 0x6c6c83 in ofp_ct_match_decode lib/ofp-ct.c:529
    #2 0x76f3b5 in ofp_print_nxt_ct_flush lib/ofp-print.c:959
    #3 0x76f3b5 in ofp_to_string__ lib/ofp-print.c:1206
    #4 0x76f3b5 in ofp_to_string lib/ofp-print.c:1264
    #5 0x770c0d in ofp_print lib/ofp-print.c:1308
    #6 0x484a9d in ofctl_ofp_print utilities/ovs-ofctl.c:4899
    #7 0x4ddb77 in ovs_cmdl_run_command__ lib/command-line.c:247
    #8 0x47f6b3 in main utilities/ovs-ofctl.c:186

Signed-off-by: Mike Pattrick <[email protected]>
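
A sketch of the pattern the patch describes: copy the 128-bit value into a properly aligned stack object with memcpy before reading it. The union here is a stand-in for ovs_be128, not the real definition:

    #include <stdint.h>
    #include <string.h>

    /* Stand-in for union ovs_be128: 16 bytes with 8-byte alignment. */
    union be128 {
        uint8_t  u8[16];
        uint64_t u64[2];
    };

    /* 'payload' points into a received message and may not be 8-byte
     * aligned, so copy it to an aligned stack object before reading. */
    static union be128
    parse_u128(const void *payload)
    {
        union be128 value;
        memcpy(&value, payload, sizeof value);
        return value;
    }

    int
    main(void)
    {
        uint8_t msg[20] = { 0 };
        union be128 v = parse_u128(msg + 2);   /* Misaligned source is fine. */
        return (int) v.u64[0];
    }
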
ovsrobot pushed a commit to ovsrobot/ovs that referenced this pull request Jun 12, 2024
When compiling with '-fsanitize=address,undefined', the "ovs-ofctl
ct-flush" test will yield the following undefined behavior flagged
by UBSan. This patch uses memcpy to move the 128-bit value onto the
stack before reading it.

lib/ofp-prop.c:277:14: runtime error: load of misaligned address
for type 'union ovs_be128', which requires 8 byte alignment
              ^
    #0 0x7735d4 in ofpprop_parse_u128 lib/ofp-prop.c:277
    #1 0x6c6c83 in ofp_ct_match_decode lib/ofp-ct.c:529
    #2 0x76f3b5 in ofp_print_nxt_ct_flush lib/ofp-print.c:959
    #3 0x76f3b5 in ofp_to_string__ lib/ofp-print.c:1206
    #4 0x76f3b5 in ofp_to_string lib/ofp-print.c:1264
    #5 0x770c0d in ofp_print lib/ofp-print.c:1308
    #6 0x484a9d in ofctl_ofp_print utilities/ovs-ofctl.c:4899
    #7 0x4ddb77 in ovs_cmdl_run_command__ lib/command-line.c:247
    #8 0x47f6b3 in main utilities/ovs-ofctl.c:186

Signed-off-by: Mike Pattrick <[email protected]>
Signed-off-by: 0-day Robot <[email protected]>
roseoriorden pushed a commit to roseoriorden/ovs that referenced this pull request Jul 1, 2024
SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior in
 lib/dpif-netlink.c:1077:40: runtime error:
   left shift of 1 by 31 places cannot be represented in type 'int'

     #0  0x73fc31 in dpif_netlink_port_add_compat lib/dpif-netlink.c:1077:40
      #1  0x73fc31 in dpif_netlink_port_add lib/dpif-netlink.c:1132:17
      #2  0x2c1745 in dpif_port_add lib/dpif.c:597:13
      #3  0x07b279 in port_add ofproto/ofproto-dpif.c:3957:17
      #4  0x01b209 in ofproto_port_add ofproto/ofproto.c:2124:13
      #5  0xfdbfce in iface_do_create vswitchd/bridge.c:2066:13
      #6  0xfdbfce in iface_create vswitchd/bridge.c:2109:13
     #7  0xfdbfce in bridge_add_ports__ vswitchd/bridge.c:1173:21
     #8  0xfb5319 in bridge_add_ports vswitchd/bridge.c:1189:5
     #9  0xfb5319 in bridge_reconfigure vswitchd/bridge.c:901:9
     #10 0xfae0f9 in bridge_run vswitchd/bridge.c:3334:9
     #11 0xfe67dd in main vswitchd/ovs-vswitchd.c:129:9
     #12 0x4b6d8f  (/lib/x86_64-linux-gnu/libc.so.6+0x29d8f)
     #13 0x4b6e3f in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x29e3f)
     #14 0x562594eed024 in _start (vswitchd/ovs-vswitchd+0x787024)

Fixes: 526df7d ("tunnel: Provide framework for tunnel extensions for VXLAN-GBP and others")
Acked-by: Mike Pattrick <[email protected]>
Signed-off-by: Ilya Maximets <[email protected]>
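
In C, 1 << 31 shifts a signed int into its sign bit, which is undefined behavior; shifting an unsigned 32-bit operand instead is well defined. A minimal sketch of the safe form:

    #include <stdint.h>
    #include <stdio.h>

    /* '1 << 31' is UB because the signed int 1 cannot represent the
     * result.  Shifting an unsigned 32-bit value is well defined. */
    static uint32_t
    flag_bit(unsigned int bit)
    {
        return UINT32_C(1) << bit;
    }

    int
    main(void)
    {
        printf("0x%x\n", (unsigned) flag_bit(31));   /* 0x80000000, no UB. */
        return 0;
    }
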
roseoriorden pushed a commit to roseoriorden/ovs that referenced this pull request Jul 1, 2024
Fix a Coverity "big parameter passed by value" report.

CID 549858 (#1 of 1): Big parameter passed by value (PASS_BY_VALUE)
pass_by_value: Passing parameter metadata of type struct tun_metadata (size 272 bytes) by value,
which exceeds the medium threshold of 256 bytes

Signed-off-by: Roi Dayan <[email protected]>
Signed-off-by: Simon Horman <[email protected]>
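
Coverity's PASS_BY_VALUE reports are normally resolved by passing a pointer (const when the callee only reads) instead of copying the large struct on every call. A sketch with an illustrative 272-byte struct, not the real tun_metadata:

    #include <stdint.h>

    struct tun_md {
        uint8_t opts[272];     /* Illustrative large metadata blob. */
    };

    /* Before: int use_md(struct tun_md md);  copies 272 bytes per call.
     * After: pass by const pointer; no copy, and the callee can only read. */
    static int
    use_md(const struct tun_md *md)
    {
        return md->opts[0];
    }

    int
    main(void)
    {
        struct tun_md md = { { 7 } };
        return use_md(&md);
    }
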
roseoriorden pushed a commit to roseoriorden/ovs that referenced this pull request Jul 1, 2024
CID 550702 (#1 of 1): Dereference null return value (NULL_RETURNS)
7. dereference: Dereferencing a pointer that might be NULL ex_type when calling nl_attr_get_u16.

Signed-off-by: Roi Dayan <[email protected]>
Signed-off-by: Simon Horman <[email protected]>
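
The defensive pattern for NULL_RETURNS findings is to check the lookup result before dereferencing it. A hedged sketch with a hypothetical attribute-lookup helper; it does not reproduce the actual netlink API:

    #include <stddef.h>
    #include <stdint.h>

    struct attr {
        uint16_t value;
    };

    /* Stand-in for a netlink-style attribute lookup that returns NULL
     * when the attribute is absent. */
    static const struct attr *
    find_attr(const struct attr *attrs, size_t n, size_t idx)
    {
        return idx < n ? &attrs[idx] : NULL;
    }

    static int
    get_ex_type(const struct attr *attrs, size_t n, uint16_t *ex_type)
    {
        const struct attr *a = find_attr(attrs, n, 3);
        if (!a) {
            return -1;     /* Attribute missing: report instead of deref. */
        }
        *ex_type = a->value;
        return 0;
    }

    int
    main(void)
    {
        struct attr attrs[2] = { { 1 }, { 2 } };
        uint16_t ex_type;
        return get_ex_type(attrs, 2, &ex_type) == -1 ? 0 : 1;
    }
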
roseoriorden pushed a commit to roseoriorden/ovs that referenced this pull request Jul 1, 2024
UB Sanitizer report:

lib/netdev-dummy.c:197:15: runtime error: member access within
misaligned address 0x00000217a7f0 for type 'struct
dummy_packet_stream', which requires 64 byte alignment
              ^
    #0 dummy_packet_stream_init lib/netdev-dummy.c:197
    #1 dummy_packet_stream_create lib/netdev-dummy.c:208
    #2 dummy_packet_conn_set_config lib/netdev-dummy.c:436
    [...]

Signed-off-by: Mike Pattrick <[email protected]>
Signed-off-by: Ilya Maximets <[email protected]>
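
A type declared with 64-byte alignment must also be allocated with that alignment; plain malloc() only guarantees alignment suitable for standard types. A sketch using C11 aligned_alloc(), which is one way to satisfy the requirement and is not necessarily how OVS fixed it:

    #include <stdlib.h>
    #include <string.h>

    /* A struct that requires 64-byte (cache line) alignment. */
    struct stream {
        _Alignas(64) char buf[128];
    };

    static struct stream *
    stream_create(void)
    {
        /* aligned_alloc() wants the size to be a multiple of the
         * alignment, so round it up. */
        size_t sz = (sizeof(struct stream) + 63) & ~(size_t) 63;
        struct stream *s = aligned_alloc(64, sz);
        if (s) {
            memset(s, 0, sz);
        }
        return s;
    }

    int
    main(void)
    {
        struct stream *s = stream_create();
        free(s);
        return 0;
    }
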
roseoriorden pushed a commit to roseoriorden/ovs that referenced this pull request Jul 1, 2024
UB Sanitizer report:
lib/dp-packet.h:587:22: runtime error: member access within misaligned
address 0x000001ecde10 for type 'struct dp_packet', which requires 64
byte alignment

    #0 in dp_packet_set_base lib/dp-packet.h:587
    #1 in dp_packet_use__ lib/dp-packet.c:46
    #2 in dp_packet_use lib/dp-packet.c:60
    #3 in dp_packet_init lib/dp-packet.c:126
    #4 in dp_packet_new lib/dp-packet.c:150
    [...]

Signed-off-by: Mike Pattrick <[email protected]>
Signed-off-by: Ilya Maximets <[email protected]>
roseoriorden pushed a commit to roseoriorden/ovs that referenced this pull request Jul 1, 2024
UB Sanitizer report:

lib/hash.h:219:17: runtime error: load of misaligned address
0x7ffc164a88b4 for type 'const uint64_t', which requires 8 byte
alignment

    #0 in hash_words_inline lib/hash.h:219
    #1 in hash_words lib/hash.h:297
    [...]

Signed-off-by: Mike Pattrick <[email protected]>
Signed-off-by: Ilya Maximets <[email protected]>
igsilya added a commit to igsilya/ovs that referenced this pull request Jul 16, 2024
'wc' can't be NULL there, and if it could be, we'd already crash a few
lines earlier, before setting up VLAN flags.

The check is misleading, as it makes people assume that wc can be
NULL.  And it makes Coverity think the same:

  CID 1596572: (#1 of 1): Dereference after null check (FORWARD_NULL)
  25. var_deref_op: Dereferencing null pointer ctx->wc.

  14. var_compare_op: Comparing ctx->wc to null implies that ctx->wc
      might be null

Remove the check.

Fixes: 3b18822 ("ofproto-dpif-mirror: Add support for pre-selection filter.")
Signed-off-by: Ilya Maximets <[email protected]>
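
The pattern in miniature: once a pointer has been dereferenced unconditionally, a later NULL check on it is dead code that misleads both readers and static analysis. A sketch with simplified, made-up types:

    #include <stdint.h>

    struct wildcards {
        uint32_t vlan_mask;
    };

    struct xlate_ctx {
        struct wildcards *wc;
    };

    static void
    set_vlan_flags(struct xlate_ctx *ctx)
    {
        /* Unconditional dereference: if wc were NULL, we would already
         * have crashed here... */
        ctx->wc->vlan_mask = 0;

        /* ...so a later 'if (ctx->wc)' is dead code that misleads both
         * readers and Coverity.  Post-patch: just use the pointer. */
        ctx->wc->vlan_mask |= UINT32_C(1) << 12;
    }

    int
    main(void)
    {
        struct wildcards wc = { 0 };
        struct xlate_ctx ctx = { &wc };
        set_vlan_flags(&ctx);
        return wc.vlan_mask ? 0 : 1;
    }
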
ovsrobot pushed a commit to ovsrobot/ovs that referenced this pull request Jul 16, 2024
'wc' can't be NULL there, and if it could be, we'd already crash a few
lines earlier, before setting up VLAN flags.

The check is misleading, as it makes people assume that wc can be
NULL.  And it makes Coverity think the same:

  CID 1596572: (#1 of 1): Dereference after null check (FORWARD_NULL)
  25. var_deref_op: Dereferencing null pointer ctx->wc.

  14. var_compare_op: Comparing ctx->wc to null implies that ctx->wc
      might be null

Remove the check.

Fixes: 3b18822 ("ofproto-dpif-mirror: Add support for pre-selection filter.")
Signed-off-by: Ilya Maximets <[email protected]>
Signed-off-by: 0-day Robot <[email protected]>
igsilya added a commit to igsilya/ovs that referenced this pull request Jul 16, 2024
'wc' can't be NULL there, and if it could be, we'd already crash a few
lines earlier, before setting up VLAN flags.

The check is misleading, as it makes people assume that wc can be
NULL.  And it makes Coverity think the same:

  CID 1596572: (#1 of 1): Dereference after null check (FORWARD_NULL)
  25. var_deref_op: Dereferencing null pointer ctx->wc.

  14. var_compare_op: Comparing ctx->wc to null implies that ctx->wc
      might be null

Remove the check.

Fixes: 3b18822 ("ofproto-dpif-mirror: Add support for pre-selection filter.")
Acked-by: Mike Pattrick <[email protected]>
Signed-off-by: Ilya Maximets <[email protected]>
igsilya added a commit to igsilya/ovs that referenced this pull request Jul 16, 2024
'wc' can't be NULL there, and if it could be, we'd already crash a few
lines earlier, before setting up VLAN flags.

The check is misleading, as it makes people assume that wc can be
NULL.  And it makes Coverity think the same:

  CID 1596572: (#1 of 1): Dereference after null check (FORWARD_NULL)
  25. var_deref_op: Dereferencing null pointer ctx->wc.

  14. var_compare_op: Comparing ctx->wc to null implies that ctx->wc
      might be null

Remove the check.

Fixes: 3b18822 ("ofproto-dpif-mirror: Add support for pre-selection filter.")
Acked-by: Mike Pattrick <[email protected]>
Signed-off-by: Ilya Maximets <[email protected]>
igsilya added a commit to igsilya/ovs that referenced this pull request Jul 17, 2024
'wc' can't be NULL there, and if it could be, we'd already crash a few
lines earlier, before setting up VLAN flags.

The check is misleading, as it makes people assume that wc can be
NULL.  And it makes Coverity think the same:

  CID 1596572: (#1 of 1): Dereference after null check (FORWARD_NULL)
  25. var_deref_op: Dereferencing null pointer ctx->wc.

  14. var_compare_op: Comparing ctx->wc to null implies that ctx->wc
      might be null

Remove the check.

Fixes: 3b18822 ("ofproto-dpif-mirror: Add support for pre-selection filter.")
Acked-by: Mike Pattrick <[email protected]>
Signed-off-by: Ilya Maximets <[email protected]>
igsilya added a commit to igsilya/ovs that referenced this pull request Jul 17, 2024
'wc' can't be NULL there, and if it could be, we'd already crash a few
lines earlier, before setting up VLAN flags.

The check is misleading, as it makes people assume that wc can be
NULL.  And it makes Coverity think the same:

  CID 1596572: (#1 of 1): Dereference after null check (FORWARD_NULL)
  25. var_deref_op: Dereferencing null pointer ctx->wc.

  14. var_compare_op: Comparing ctx->wc to null implies that ctx->wc
      might be null

Remove the check.

Fixes: 3b18822 ("ofproto-dpif-mirror: Add support for pre-selection filter.")
Acked-by: Mike Pattrick <[email protected]>
Signed-off-by: Ilya Maximets <[email protected]>
mkp-rh added a commit to mkp-rh/ovs that referenced this pull request Aug 27, 2024
Coverity identified the following issue:

CID 425094: (#1 of 1): Unchecked return value (CHECKED_RETURN)
4. check_return: Calling dp_packet_hwol_tx_ip_csum without checking
return value (as is done elsewhere 9 out of 11 times).

This appears to be a true positive; the field's getter was called instead
of its setter.

Fixes: 084c808 ("userspace: Support VXLAN and GENEVE TSO.")
Reported-by: Eelco Chaudron <[email protected]>
Signed-off-by: Mike Pattrick <[email protected]>
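
The bug class in miniature: a value-returning getter is called where the setter was intended, so the result is silently discarded, which is exactly what CHECKED_RETURN flags. A sketch with hypothetical helpers modeled loosely on the names above:

    #include <stdbool.h>

    struct pkt {
        bool tx_ip_csum;
    };

    static bool
    pkt_hwol_tx_ip_csum(const struct pkt *p)          /* Getter. */
    {
        return p->tx_ip_csum;
    }

    static void
    pkt_hwol_set_tx_ip_csum(struct pkt *p, bool v)    /* Setter. */
    {
        p->tx_ip_csum = v;
    }

    int
    main(void)
    {
        struct pkt p = { false };

        /* Bug: calling the getter here would read the flag and silently
         * drop the result.  Fix: call the setter that was intended. */
        pkt_hwol_set_tx_ip_csum(&p, true);

        return pkt_hwol_tx_ip_csum(&p) ? 0 : 1;
    }
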
ovsrobot pushed a commit to ovsrobot/ovs that referenced this pull request Aug 27, 2024
Coverity identified the following issue:

CID 425094: (#1 of 1): Unchecked return value (CHECKED_RETURN)
4. check_return: Calling dp_packet_hwol_tx_ip_csum without checking
return value (as is done elsewhere 9 out of 11 times).

This appears to be a true positive; the field's getter was called instead
of its setter.

Fixes: 084c808 ("userspace: Support VXLAN and GENEVE TSO.")
Reported-by: Eelco Chaudron <[email protected]>
Signed-off-by: Mike Pattrick <[email protected]>
Signed-off-by: 0-day Robot <[email protected]>
chaudron pushed a commit that referenced this pull request Aug 29, 2024
Coverity identified the following issue:

CID 425094: (#1 of 1): Unchecked return value (CHECKED_RETURN)
4. check_return: Calling dp_packet_hwol_tx_ip_csum without checking
return value (as is done elsewhere 9 out of 11 times).

This appears to be a true positive; the field's getter was called instead
of its setter.

Fixes: 084c808 ("userspace: Support VXLAN and GENEVE TSO.")
Reported-by: Eelco Chaudron <[email protected]>
Acked-by: Simon Horman <[email protected]>
Signed-off-by: Mike Pattrick <[email protected]>
Signed-off-by: Eelco Chaudron <[email protected]>
chaudron pushed a commit that referenced this pull request Aug 29, 2024
Coverity identified the following issue:

CID 425094: (#1 of 1): Unchecked return value (CHECKED_RETURN)
4. check_return: Calling dp_packet_hwol_tx_ip_csum without checking
return value (as is done elsewhere 9 out of 11 times).

This appears to be a true positive; the field's getter was called instead
of its setter.

Fixes: 084c808 ("userspace: Support VXLAN and GENEVE TSO.")
Reported-by: Eelco Chaudron <[email protected]>
Acked-by: Simon Horman <[email protected]>
Signed-off-by: Mike Pattrick <[email protected]>
Signed-off-by: Eelco Chaudron <[email protected]>
chaudron pushed a commit that referenced this pull request Aug 29, 2024
Coverity identified the following issue:

CID 425094: (#1 of 1): Unchecked return value (CHECKED_RETURN)
4. check_return: Calling dp_packet_hwol_tx_ip_csum without checking
return value (as is done elsewhere 9 out of 11 times).

This appears to be a true positive; the field's getter was called instead
of its setter.

Fixes: 084c808 ("userspace: Support VXLAN and GENEVE TSO.")
Reported-by: Eelco Chaudron <[email protected]>
Acked-by: Simon Horman <[email protected]>
Signed-off-by: Mike Pattrick <[email protected]>
Signed-off-by: Eelco Chaudron <[email protected]>
mkp-rh added a commit to mkp-rh/ovs that referenced this pull request Jan 7, 2025
Calls to ovs_fatal from check_ovsdb_error would trigger an ASan splat in
the CI. This is a harmless memory leak, as the process would always
terminate immediately anyway.

==245041==ERROR: LeakSanitizer: detected memory leaks

Direct leak of 32 byte(s) in 1 object(s) allocated from:
    #0 in malloc
    #1 in xmalloc__ lib/util.c:141:15
    #2 in xmalloc lib/util.c:176:12
    #3 in ovsdb_error_valist lib/ovsdb-error.c:40:33
    #4 in ovsdb_error lib/ovsdb-error.c:55:13
    #5 in ovsdb_log_open ovsdb/log.c
    #6 in do_db_name ovsdb/ovsdb-tool.c:482:23
    #7 in ovs_cmdl_run_command__ lib/command-line.c:247:17
    #8 in main ovsdb/ovsdb-tool.c:82:5

Signed-off-by: Mike Pattrick <[email protected]>
ovsrobot pushed a commit to ovsrobot/ovs that referenced this pull request Jan 7, 2025
Calls to ovs_fatal from check_ovsdb_error would trigger an ASan splat in
the CI. This is a harmless memory leak, as the process would always
terminate immediately anyway.

==245041==ERROR: LeakSanitizer: detected memory leaks

Direct leak of 32 byte(s) in 1 object(s) allocated from:
    #0 in malloc
    #1 in xmalloc__ lib/util.c:141:15
    #2 in xmalloc lib/util.c:176:12
    #3 in ovsdb_error_valist lib/ovsdb-error.c:40:33
    #4 in ovsdb_error lib/ovsdb-error.c:55:13
    #5 in ovsdb_log_open ovsdb/log.c
    #6 in do_db_name ovsdb/ovsdb-tool.c:482:23
    #7 in ovs_cmdl_run_command__ lib/command-line.c:247:17
    #8 in main ovsdb/ovsdb-tool.c:82:5

Signed-off-by: Mike Pattrick <[email protected]>
Signed-off-by: 0-day Robot <[email protected]>