add .gitlab-ci.yml in order to use ci/cd in gitlab #2

Closed

Conversation

@dansl19 commented Jun 11, 2018

Add .gitlab-ci.yml in order to use the CI/CD pipeline in GitLab.
Here we have two parts:
GitHub repo: code management.
GitLab repo: CI/CD pipeline for the code managed on GitHub.
The plan is to continue using GitHub for code management and GitLab for the CI/CD pipeline.
GitLab repo: https://gitlab.com/travelping/vpp-github
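
For illustration, a minimal pipeline file along these lines could look like the sketch below; the stages, image, and make targets are assumptions about a typical VPP build, not the contents of the file actually added in this PR.

```yaml
# Hypothetical .gitlab-ci.yml sketch: stages, image and targets are illustrative.
stages:
  - build
  - test

build-vpp:
  stage: build
  image: ubuntu:18.04
  script:
    - make install-dep   # install VPP build dependencies (assumed target)
    - make build         # debug build of VPP

test-vpp:
  stage: test
  image: ubuntu:18.04
  script:
    - make test          # run the VPP unit test suite (assumed target)
```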

@dansl19 dansl19 closed this Jun 11, 2018
@dansl19 dansl19 deleted the feature/deployment_vpp_gitlab branch June 11, 2018 10:09
fdio-github pushed a commit that referenced this pull request Jun 25, 2018
Change-Id: I9d5b5d925fd2c09a1113fc51e433a16d729a241b
Signed-off-by: Klement Sekera <[email protected]>
artem-belov pushed a commit to artem-belov/vpp that referenced this pull request Apr 12, 2019
README: Add a link to the slides
artem-belov pushed a commit to artem-belov/vpp that referenced this pull request Apr 12, 2019
* Client/Server plugin and Discovery

Signed-off-by: Serguei Bezverkhi <[email protected]>

* Adding missing folder

Signed-off-by: Serguei Bezverkhi <[email protected]>

* Fixing typo

Signed-off-by: Serguei Bezverkhi <[email protected]>

* Move wait command

Signed-off-by: Serguei Bezverkhi <[email protected]>

* Minor cosmetic change in message

Signed-off-by: Serguei Bezverkhi <[email protected]>

* Minor cosmetic change in message FDio#2

Signed-off-by: Serguei Bezverkhi <[email protected]>

* Addressing comments

Signed-off-by: Serguei Bezverkhi <[email protected]>

* Addressing comments FDio#2

Signed-off-by: Serguei Bezverkhi <[email protected]>

* Addressing CI failure

Signed-off-by: Serguei Bezverkhi <[email protected]>
fdio-github pushed a commit that referenced this pull request Mar 19, 2021
This issue happens if:
- the API client connects via Unix socket
- the client issues the *_dump API call and immediately disconnects

What happens next is that the API handler keeps sending the *_details
messages, but at some point the write fails and the socket is deleted.

Any subsequent use of the registration pointer then results in the socket
being treated as a shared-memory socket. This causes a crash, because the
data in that structure no longer makes sense, as shown below:

|
|Thread 1 "vpp_main" received signal SIGSEGV, Segmentation fault.
|__GI___pthread_mutex_lock (mutex=0x0) at ../nptl/pthread_mutex_lock.c:67
|67      ../nptl/pthread_mutex_lock.c: No such file or directory.
|(gdb) bt
|#0  __GI___pthread_mutex_lock (mutex=0x0) at ../nptl/pthread_mutex_lock.c:67
|#1  0x00007ffff500f957 in svm_queue_lock (q=0x0) at /home/ubuntu/vpp/src/svm/queue.c:101
|#2  svm_queue_add (q=0x0, elem=0x7fffa76c2de0 "\210\365\006\060\001", nowait=0) at /home/ubuntu/vpp/src/svm/queue.c:274
|#3  0x00007ffff6e131e3 in vl_api_send_msg (rp=<optimized out>, elem=<optimized out>) at /home/ubuntu/vpp/src/vlibmemory/api.h:43
|#4  send_sw_interface_details (am=<optimized out>, rp=<optimized out>, swif=0x7fffb957a0bc, interface_name=<optimized out>, context=<optimized out>)
|    at /home/ubuntu/vpp/src/vnet/interface_api.c:353
|#5  0x00007ffff6e0edeb in vl_api_sw_interface_dump_t_handler (mp=<optimized out>) at /home/ubuntu/vpp/src/vnet/interface_api.c:412
|#6  0x00007ffff7daeb48 in msg_handler_internal (am=<optimized out>, the_msg=0x7fffb839a5e0, trace_it=<optimized out>, do_it=1, free_it=0)
|    at /home/ubuntu/vpp/src/vlibapi/api_shared.c:501
|#7  vl_msg_api_socket_handler (the_msg=0x7fffb839a5e0) at /home/ubuntu/vpp/src/vlibapi/api_shared.c:790
|#8  0x00007ffff7d7c608 in vl_socket_process_api_msg (rp=<optimized out>, input_v=0x7fffa76c2de0 "\210\365\006\060\001") at /home/ubuntu/vpp/src/vlibmemory/socket_api.c:212
|#9  0x00007ffff7d89ff1 in vl_api_clnt_process (vm=<optimized out>, node=<optimized out>, f=<optimized out>) at /home/ubuntu/vpp/src/vlibmemory/vlib_api.c:405
|#10 0x00007ffff53bf9a7 in vlib_process_bootstrap (_a=<optimized out>) at /home/ubuntu/vpp/src/vlib/main.c:1490
|#11 0x00007ffff4da0b2c in clib_calljmp () from /home/ayourtch/vpp/build-root/install-vpp-native/vpp/lib/libvppinfra.so.21.06
|#12 0x00007fffa99a4d90 in ?? ()
|#13 0x00007ffff53b6cb2 in vlib_process_startup (vm=0x7ffff56a9880 <vlib_global_main>, p=0x7fffb5d41380, f=0x0) at /home/ubuntu/vpp/src/vlib/main.c:1515
|#14 dispatch_process (vm=0x7ffff56a9880 <vlib_global_main>, p=0x7fffb5d41380, f=0x0, last_time_stamp=<optimized out>) at /home/ubuntu/vpp/src/vlib/main.c:1571
|#15 0x0000000000000000 in ?? ()
|(gdb) frame 3
|#3  0x00007ffff6e131e3 in vl_api_send_msg (rp=<optimized out>, elem=<optimized out>) at /home/ubuntu/vpp/src/vlibmemory/api.h:43
|43            vl_msg_api_send_shmem (rp->vl_input_queue, (u8 *) & elem);
|(gdb) l
|38          {
|39            vl_socket_api_send (rp, elem);
|40          }
|41        else
|42          {
|43            vl_msg_api_send_shmem (rp->vl_input_queue, (u8 *) & elem);
|44          }
|45      }
|46
|47      always_inline int
|(gdb)
|

The approach in this change is to avoid the closing operations "here and
now", and instead mark the registration as a zombie and place a forced RPC
towards a callback that does the actual cleanup work.

The forced RPC is handled via the API processing loop with barrier sync,
so we are guaranteed not to have any API processing in progress.
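
A rough sketch of that "zombie + deferred cleanup" pattern, in plain C with hypothetical type and helper names (the real vl_api_registration_t and RPC machinery in VPP are not reproduced here):

```c
#include <stdlib.h>
#include <unistd.h>

/* Hypothetical registration object; the real vl_api_registration_t differs. */
typedef struct
{
  int fd;
  int is_zombie;		/* set when the peer disappears mid-dump */
} api_registration_t;

/* Assumed helper that queues work onto the main API processing loop. */
extern void queue_rpc_to_main_loop (void (*fn) (void *), void *arg);

static void
registration_cleanup_rpc (void *arg)
{
  /* Runs from the API processing loop under barrier sync, so no dump
   * handler can still be using the registration while it is freed. */
  api_registration_t *reg = arg;
  close (reg->fd);
  free (reg);
}

static void
handle_write_error (api_registration_t *reg)
{
  /* Do not close/free "here and now": the dump handler may still hold
   * the pointer.  Mark the registration and defer the real cleanup. */
  reg->is_zombie = 1;
  queue_rpc_to_main_loop (registration_cleanup_rpc, reg);
}
```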

Type: fix
Change-Id: I1972d42da620bdb4fd773c83262863c2781d9005
Signed-off-by: Andrew Yourtchenko <[email protected]>
fdio-github pushed a commit that referenced this pull request Jun 24, 2021
==283032==AddressSanitizer CHECK failed: compiler-rt/lib/asan/asan_mapping.h:366
          "((AddrIsInMem(p))) != (0)" (0x0, 0x0)
    #0 0x49c128 in __asan::AsanCheckFailed
    #1 0x4ae8dc in __sanitizer::CheckFailed
    #2 0x495dec in __asan::ShadowSegmentEndpoint::ShadowSegmentEndpoint
    #3 0x495e48 in __asan_unpoison_memory_region
    #4 0xfffff4e851f8 in svm_map_region /home/vpp/src/svm/svm.c:611:7
    #5 0xfffff4e86d9c in svm_region_init_internal /home/vpp/src/svm/svm.c:797:8
    #6 0xfffff4e87ce4 in svm_region_init_args /home/vpp/src/svm/svm.c:880:3
    #7 0xfffff7f30d30 in vlibmemory_init /home/vpp/src/vlibmemory/memory_api.c:974:3
    #8 0xfffff4fd5368 in vlib_main /home/vpp/src/vlib/main.c:1986:16

svm_global_region_base_va 0x200000000000 is not in the aarch64 ASan mapping
range, leading to the check failure above, so VPP cannot start.

aarch64 asan mapping
|| `[0x201000000000, 0xffffffffffff]` || HighMem    ||
|| `[0x041200000000, 0x200fffffffff]` || HighShadow ||
|| `[0x001200000000, 0x0411ffffffff]` || ShadowGap  ||
|| `[0x001000000000, 0x0011ffffffff]` || LowShadow  ||
|| `[0x000000000000, 0x000fffffffff]` || LowMem     ||

x86 asan mapping
|| `[0x10007fff8000, 0x7fffffffffff]` || HighMem    ||
|| `[0x02008fff7000, 0x10007fff7fff]` || HighShadow ||
|| `[0x00008fff7000, 0x02008fff6fff]` || ShadowGap  ||
|| `[0x00007fff8000, 0x00008fff6fff]` || LowShadow  ||
|| `[0x000000000000, 0x00007fff7fff]` || LowMem     ||
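
The failure can be read straight off the tables: the default base VA lands inside HighShadow on aarch64 but inside HighMem on x86-64. A tiny check of that arithmetic, using only the range boundaries listed above (not VPP code):

```c
#include <stdint.h>
#include <stdio.h>

int
main (void)
{
  uint64_t base_va = 0x200000000000ULL;	/* svm_global_region_base_va */

  /* HighMem lower bounds taken from the mappings above. */
  uint64_t aarch64_highmem_lo = 0x201000000000ULL;
  uint64_t x86_highmem_lo = 0x10007fff8000ULL;

  /* On aarch64 the base VA is below HighMem (it falls in HighShadow),
   * so ASan's AddrIsInMem() check fails when the region is unpoisoned;
   * on x86-64 the same address is well inside HighMem. */
  printf ("aarch64: base VA in HighMem? %s\n",
	  base_va >= aarch64_highmem_lo ? "yes" : "no");
  printf ("x86-64:  base VA in HighMem? %s\n",
	  base_va >= x86_highmem_lo ? "yes" : "no");
  return 0;
}
```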

Type: fix

Signed-off-by: Tianyu Li <[email protected]>
Change-Id: I55ddbdcd361d66d4cfaf6459b2fa20fd8b64af37
fjccc referenced this pull request Feb 6, 2024
Type: fix

Signed-off-by: Florin Coras <[email protected]>
Change-Id: I3d44ff851da00573343e15712284af3b9c3912e3
fdio-github pushed a commit that referenced this pull request Feb 19, 2024
For the OpenSSL crypto engine based cipher encrypt/decrypt and HMAC IPsec
use cases, the OpenSSL API calls that do ctx init and key expansion are
moved to the initialization stage.

In the current implementation, the ctx is initialized with "key" and "iv"
in a single data-plane call to
EVP_EncryptInit_ex (ctx, 0, 0, key->data, op->iv),
while the ctx can instead be initialized with 'key' and 'iv' separately,
i.e. with two API calls:
 1. EVP_EncryptInit_ex (ctx, 0, 0, key->data, 0)
 2. EVP_EncryptInit_ex (ctx, 0, 0, 0, op->iv)

Since the 'key' for a given IPsec SA is fixed and known, call #1 can be
placed in the IPsec SA initialization stage, while call #2 must stay in the
data plane for each packet, because the "iv" is random for each packet.
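
A hedged sketch of that split using the standard OpenSSL EVP API (error handling omitted; names such as sa_ctx_init, sa_key, and pkt_iv are illustrative, not the actual VPP engine code):

```c
#include <openssl/evp.h>

/* Initialization stage: key expansion happens once per SA (call #1). */
EVP_CIPHER_CTX *
sa_ctx_init (const unsigned char *sa_key)
{
  EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new ();
  /* key only, no IV yet */
  EVP_EncryptInit_ex (ctx, EVP_aes_128_cbc (), NULL, sa_key, NULL);
  return ctx;
}

/* Data plane: per packet, only the random IV is set on the ready ctx (call #2). */
int
sa_encrypt_packet (EVP_CIPHER_CTX *ctx, const unsigned char *pkt_iv,
		   const unsigned char *in, int in_len, unsigned char *out)
{
  int len = 0, total = 0;
  /* IV only; cipher and key were already configured at init time */
  EVP_EncryptInit_ex (ctx, NULL, NULL, NULL, pkt_iv);
  EVP_EncryptUpdate (ctx, out, &len, in, in_len);
  total = len;
  EVP_EncryptFinal_ex (ctx, out + total, &len);
  return total + len;
}
```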

Type: feature
Signed-off-by: Lijian Zhang <[email protected]>
Change-Id: Ided4462c1d4a38addc3078b03d618209e040a07a
fjccc referenced this pull request Jul 3, 2024
Change-Id: I233d02a669b6a0504cd54590c6c8e4fefadc4713
Signed-off-by: Florin Coras <[email protected]>
fjccc referenced this pull request Jul 18, 2024
Change-Id: I3c58367eec2243fe19b75be78a175c5261863e9e
Signed-off-by: Florin Coras <[email protected]>