This repository has been archived by the owner on Oct 15, 2024. It is now read-only.

0.9.6 release #3824

Closed
14 tasks
markus2330 opened this issue May 2, 2021 · 33 comments · Fixed by #3883
@markus2330
Contributor

Here is a copy from doc/todo/RELEASE.md, maybe we should add more [ ] so that more can be ticked?

Release Procedure Documentation

This document describes what to do for a release.

Manual Testing

  • fuzz checker
  • memleaks
  • ASAN

If no release critical problems are found during testing, continue.

Tasks before Release

Updates

  • Update doc/COMPILE.md to reflect actually tested setups
  • Run scripts/dev/update-infos-status with arguments for heuristics (--auto) and check
    transitions in plugin status: especially from/to experimental
  • Write release notes (without hash sums, guid or proper download links)
  • cp doc/todo/NEWS.md doc/news/_preparation_next_release.md

Increment Version Number

  • CMakeLists.txt
  • Increment version in output of doc/tutorials/hello-elektra.md
  • Increment the libelektra version in examples/external/java/read-keys-example/pom.xml and src/bindings/jna/README.md
  • Increment the Elektra version in scripts/docker/alpine/*/release.Dockerfile
  • Change VERSION variable in build-server:
    • Go to https://build.libelektra.org
    • Select "Manage Jenkins" -> "Configure System"
    • Scroll down until "Global Properties" and change the variable VERSION

Check

When Source Code is considered ready

  • Trigger release pipeline with parameters:
    The revision parameters should be set to 1 (default).
    On a package revision release (version does not change, only package revision) the revision parameters need to be incremented.
  • Download artifacts after release build is finished and pipeline waits for input:
    To download the artifacts either open the "Artifacts" tab on Blue Ocean or open this URL
    https://build.libelektra.org/job/libelektra-release/<BUILD_NUMBER>/artifact/artifacts/
    where <BUILD_NUMBER> is the number of the current pipeline job which can be found at https://build.libelektra.org/job/libelektra-release
    in the "Build-History" (bottom-left) beginning with "#"
  • Inspect the logs from the artifacts
    For each distribution two artifact folders exist: "<DISTRO_CODENAME>" and "<DISTRO_CODENAME>-installed".
    "<DISTRO_CODENAME>-installed" contains the test and strace logs of Elektra installed through the built packages.
    For the <DISTRO_CODENAME> artifact:
    • Error logs (*.error) must be empty
    • Check in the (test-*) folders whether the passed tests really were successful, i.e. they:
      • do not contain error output
      • were not skipped
    • Check if the version is correct in the version file
    • Check if the package build log has warnings (<DISTRO_CODENAME>/elektra_*.build and <DISTRO_CODENAME>/elektra_*.error)
    • Check if all plugins/bindings/tools that are part of a package were included (<DISTRO_CODENAME>/elektra_*.build)
    For the <DISTRO_CODENAME>-installed artifact:
    • Check in the (test-*) folders whether the passed tests really were successful, i.e. they:
      • do not contain error output
      • were not skipped
  • Inspect the changes to the libelektra git repository in git/master.log
    • Check diff of commit "release: regenerate plugins overview picture" and if
      doc/images/plugins.pdf was generated correctly
    • Check diff of commit "release: update plugin info status": Plugin tags should be sorted (optional commit)
    • Check diff of commit "release: update debian/changelog": Changelog should have a new entry for this release
    • Check diff of commit "release: update fedora/changelog": Changelog should have a new entry for this release

Publish

  • Publish packages and artifacts from the pipeline:
    • Go to the classic Jenkins UI at https://build.libelektra.org/job/libelektra-release
      (BlueOcean sometimes doesn't show the Input step on long pipelines)
    • Select "Paused for Input" on the side menu
    • On the prompt "Publish Release?" select "Yes" to publish the artifacts
  • Verify that debs.libelektra.org and rpms.libelektra.org contain the released packages.
  • Complete release notes:
    • Move release notes to final name, add links where to download release
    • Clean the git statistics in git/statistics and add them to the release notes (formatting, remove uninteresting data)
    • Add download links to release notes (api-docu, release tarball)
    • Add hash sums to release notes
  • Run linkchecker to verify that all download links of the current release are working:
    mkdir build && cd build && make html && ../scripts/link-checker external-links.txt
  • Wait for master build to finish and verify website is correct

Preparation for next release

  • Increment CMPVERSION in scripts/build/run_icheck
  • Cleanup tests/icheck.suppression (and add info to release notes)

Announce

Send out release mails & post on social media (see GitHub issue #676)

Special Requirements

  • Every first release of the year
    • Update copyright in LICENSE.md
@markus2330 markus2330 added this to the 0.9.6 milestone May 2, 2021
@markus2330
Contributor Author

Possible reasons for release:

  • (important) fix in JNI
  • FUSE by @dev2718

@mpranj
Member

mpranj commented May 2, 2021

More reasons:

  • GCC 11 compatibility
  • Clang 12 compatibility
  • Fedora 34 stable packages
  • Many docu improvements & upgrading to Doxygen 1.9.x & regenerating man pages with Ronn-NG
  • Finish "move kdb header" && upgrading formatting scripts/tools

@mpranj
Member

mpranj commented May 16, 2021

I would suggest we postpone the release a few days such that we get the important JNI fixes etc.

@markus2330
Contributor Author

Yes, seems like it is needed.

https://github.com/ElektraInitiative/libelektra/milestone/27 has many open points, many of which are assigned to you?

Also a PR is tagged with 0.9.6: #3813, which imho should not go into the next release (too risky to merge something like this last minute; better to merge it shortly after the release). The other tagged PRs should go into 0.9.6 though.

@markus2330
Contributor Author

As discussed with @kodebach we would temporarily fix #3868 with ENABLE_AUTO_NATIVE_REF_CLEANUP and release this week. In July we clean up this in #3693.

@tucek what is the state of #3863?

@robaerd what is the state of #3861?

@mpranj do you have time to release this week?

Would be great if we could do this release asap!

@mpranj
Member

mpranj commented Jun 2, 2021

do you have time to release this week?

I have time for a release this week. I just thought the mentioned PRs need to be in the release, so I didn't release earlier.

@tucek
Contributor

tucek commented Jun 2, 2021

@tucek what is the state of #3863?

Currently only plugin-specific updates are missing (tutorial, example plugin rework, ...).
I would like to see the rest of the PR in this release. There are several interesting improvements for the Java binding already ready to be merged. Therefore I propose moving plugin-related improvements to a new PR. If you concur, I would ready the PR today so it can be merged tomorrow.

As discussed with @kodebach we would temporarily fix #3868 with ENABLE_AUTO_NATIVE_REF_CLEANUP and release this week. In July we clean up this in #3693.

Just to clarify:
Currently ENABLE_AUTO_NATIVE_REF_CLEANUP = false does two things:

  1. disables automated release of Key and KeySet via a Cleaner triggered by garbage collection
  2. disables incrementing a key's reference count when a Java Key representation is created or before keyDel is called in Key::release

Manual release still calls keyDel and ksDel respectively.

One other thing I would like to clarify, independently of ENABLE_AUTO_NATIVE_REF_CLEANUP:
Some functions, like Key::getMeta (which calls keyGetMeta), state: "You must not delete or change the returned key, use keySetMeta() if you want to delete or change it."
Wouldn't it be OK to still call keyDel for such returned keys? Since they are in the meta key set of the key, their reference counter is higher than zero and keyDel would just do nothing.
And are these keys already read-only and unmodifiable, or should the JNA binding protect such keys against modification using appropriate types?

@kodebach
Member

kodebach commented Jun 2, 2021

Currently ENABLE_AUTO_NATIVE_REF_CLEANUP = false does two things: [...]

Yes, that is exactly what we need. All automatic cleanup tasks should be disabled, so we don't get double frees out of nowhere.

Manual release still calls keyDel and ksDel respectively.

Yes, this is fine. If there is a manual release that causes a double-free, we can just remove it. The problem is just that we currently can't correctly prevent automatic double-frees.

Wouldn't it be OK to still call keyDel for such returned keys? Since they are in the meta key set of the key, their reference counter is higher than zero and keyDel would just do nothing.

Yes, with the current implementation it would probably be fine to just call keyDel. However, calling keyDecRef and then keyDel could cause problems. The note is more about the fact that keyGetMeta returns the exact Key * stored in the KeySet, so modifications could cause various problems.

And are these keys already read-only and unmodifiable, or should the JNA binding protect such keys against modification using appropriate types?

There is already some protection, but there are plans for better protection mechanisms as well. Additional protection on the Java side wouldn't hurt regardless, especially if you can improve the error messages, error handling or the API in general. For example, you could take a hint from the Rust binding and use separate ReadableKey and WritableKey interfaces. Then you don't even have to perform any checks, Key::getMeta could just return a ReadableKey.

However, I think this is a discussion for the future and not relevant to the 0.9.6 Release.
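The ReadableKey/WritableKey split suggested above can be sketched in plain Java. This is purely illustrative: none of these types exist in the JNA binding, and KeyImpl is a hypothetical stand-in for the real native-backed Key.

```java
// Illustrative sketch only: none of these types exist in the JNA binding yet.
// The names ReadableKey/WritableKey follow the suggestion above; KeyImpl is a
// stand-in for the real native-backed Key.
interface ReadableKey {
    String getName();
    String getString();
}

interface WritableKey extends ReadableKey {
    void setString(String value);
}

class KeyImpl implements WritableKey {
    private final String name;
    private String value;

    KeyImpl(String name, String value) {
        this.name = name;
        this.value = value;
    }

    @Override public String getName() { return name; }
    @Override public String getString() { return value; }
    @Override public void setString(String value) { this.value = value; }

    // Returning the read-only interface statically prevents modification of
    // meta keys, so no runtime checks are needed.
    ReadableKey getMeta(String metaName) {
        return new KeyImpl(metaName, "(meta value)");
    }
}

class ReadOnlyKeyDemo {
    public static void main(String[] args) {
        KeyImpl key = new KeyImpl("user:/foo", "bar");
        ReadableKey meta = key.getMeta("meta:/type");
        System.out.println(meta.getName()); // prints "meta:/type"
        // meta.setString("x"); // would not compile: ReadableKey has no setters
    }
}
```

The invalid mutation is rejected at compile time rather than at runtime, which is exactly the advantage of the interface split over runtime checks.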

@tucek
Contributor

tucek commented Jun 2, 2021

And are these keys already read-only and unmodifiable, or should the JNA binding protect such keys against modification using appropriate types?

There is already some protection, but there are plans for better protection mechanisms as well. Additional protection on the Java side wouldn't hurt regardless, especially if you can improve the error messages, error handling or the API in general. For example, you could take a hint from the Rust binding and use separate ReadableKey and WritableKey interfaces. Then you don't even have to perform any checks, Key::getMeta could just return a ReadableKey.

However, I think this is a discussion for the future and not relevant to the 0.9.6 Release.

Of course in Java it would have to be implemented in a very similar fashion.

@tucek
Contributor

tucek commented Jun 2, 2021

Currently ENABLE_AUTO_NATIVE_REF_CLEANUP = false does two things: [...]

Yes, that is exactly what we need. All automatic cleanup tasks should be disabled, so we don't get double frees out of nowhere.

Manual release still calls keyDel and ksDel respectively.

Yes, this is fine. If there is a manual release that causes a double-free, we can just remove it. The problem is just that we currently can't correctly prevent automatic double-frees.

But this actually breaks manual release:

var key = Key.create("user:/tests/someKey");
var keySet = KeySet.create(key);

// clean-up
keySet.release();
key.release(); // double free: keySet.release() already freed the backing key,
               // because the ref count was not increased when the Key was created

The only viable temporary solution I see is that ENABLE_AUTO_NATIVE_REF_CLEANUP = false also disables calling keyDel in Key::release.

@kodebach
Member

kodebach commented Jun 2, 2021

But this actually breaks manual release: [...]

This example actually works as intended. The C equivalent would be:

Key * key = keyNew ("user:/tests/someKey", KEY_END);
KeySet * keySet = ksNew (1, key, KS_END);

ksDel (keySet);
keyDel (key); // double free

This behaves exactly like the Java version. When you pass a Key * to ksNew, the KeySet takes ownership and you are not supposed to call keyDel on that Key * anymore. If you need the Key * longer than the KeySet *, you must use keyIncRef manually.

While this may seem odd or even wrong at first, it is needed to allow code like this:

KeySet * ks = ksNew (1, keyNew ("user:/foo", KEY_END), keyNew ("user:/bar", KEY_END), KS_END);

If ksNew did not take ownership of the Key *, you would need to call keyDel manually on every Key * you pass, which would make calling keyNew inline in ksNew impossible.

There may be other examples where manual release is actually broken because of ENABLE_AUTO_NATIVE_REF_CLEANUP = false. But that is still fixable by temporarily removing or commenting out the offending release() call. The big problem is that you cannot prevent the GC from calling release() if there is a double-free situation.

@robaerd
Member

robaerd commented Jun 2, 2021

@robaerd what is the state of #3861?

Should be finished by tomorrow.

@tucek
Contributor

tucek commented Jun 2, 2021

But this actually breaks manual release: [...]

This example actually works as intended. The C equivalent would be:

Key * key = keyNew ("user:/tests/someKey", KEY_END);
KeySet * keySet = ksNew (1, key, KS_END);

ksDel (keySet);
keyDel (key); // double free

This behaves exactly like the Java version. When you pass a Key * to ksNew, the KeySet takes ownership and you are not supposed to call keyDel on that Key * anymore. If you need the Key * longer than the KeySet *, you must use keyIncRef manually.

While this may seem odd or even wrong at first, it is needed to allow code like this:

KeySet * ks = ksNew (1, keyNew ("user:/foo", KEY_END), keyNew ("user:/bar", KEY_END), KS_END);

If ksNew did not take ownership of the Key *, you would need to call keyDel manually on every Key * you pass, which would make calling keyNew inline in ksNew impossible.

There may be other examples where manual release is actually broken because of ENABLE_AUTO_NATIVE_REF_CLEANUP = false. But that is still fixable by temporarily removing or commenting out the offending release() call. The big problem is that you cannot prevent the GC from calling release() if there is a double-free situation.

Since the Java binding currently increments a key's reference count when a Key representing it is created, either the user or the GC will have to release it (keyDecRef and keyDel). The following is currently the correct API usage (see #3863):

// alternative 1: let the GC clean up
var keySet = KeySet.create(Key.create("user:/foo"), Key.create("user:/bar"));

// alternative 2: optional manual clean-up
var key1 = Key.create("user:/foo");
var key2 = Key.create("user:/bar");
var keySet2 = KeySet.create(key1, key2);

key1.release();
key2.release();
keySet2.release();

If we want to introduce (for the Java binding) a concept of transferring ownership to a KeySet when a Key is added, we would have to stop incrementing the key's reference counter when a Java Key representation is created. This would burden users of the Java API with correctly releasing keys that are not added to a key set.

Bottom line: a user of the Java API should not have to think about releasing keys or key sets at all, except for optimization purposes.
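The ref-counting scheme described above can be simulated without the native library. This is a simplified model with made-up names (NativeKey, KeySetModel); the real binding goes through keyIncRef, keyDecRef and keyDel via JNA.

```java
// Simplified model of the ref-counting scheme described above: a native key
// is only freed once its reference count drops to zero, so both the KeySet
// and the Java wrapper can release it without a double free.
import java.util.ArrayList;
import java.util.List;

class NativeKey {
    final String name;
    int refCount = 0;
    boolean freed = false;

    NativeKey(String name) { this.name = name; }

    void incRef() { refCount++; }

    // Mirrors keyDecRef + keyDel: only free when no references remain.
    void decRefAndDel() {
        if (freed) throw new IllegalStateException("double free of " + name);
        if (refCount > 0) refCount--;
        if (refCount == 0) freed = true;
    }
}

class KeySetModel {
    private final List<NativeKey> keys = new ArrayList<>();

    void append(NativeKey key) { key.incRef(); keys.add(key); }

    void release() { keys.forEach(NativeKey::decRefAndDel); keys.clear(); }
}

class RefCountDemo {
    public static void main(String[] args) {
        NativeKey key = new NativeKey("user:/foo");
        key.incRef(); // Java wrapper takes a reference on creation

        KeySetModel ks = new KeySetModel();
        ks.append(key); // KeySet takes its own reference

        ks.release();       // drops the KeySet's reference; key survives
        key.decRefAndDel(); // drops the wrapper's reference; key is freed
        System.out.println("freed without double free: " + key.freed); // prints "freed without double free: true"
    }
}
```

Because the wrapper holds its own reference, releasing the key set and the key in either order is safe, which is why manual release stays optional in this model.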

@tucek
Contributor

tucek commented Jun 2, 2021

@tucek what is the state of #3863?

#3863 is ready for review.

@kodebach
Member

kodebach commented Jun 3, 2021

Bottom line: a user of the Java API should not have to think about releasing keys or key sets at all, except for optimization purposes.

I agree, but the goal for this release is simply to have no segfaults. The easiest way to achieve this is simply to accept memory leaks and not actually do cleanup.

After the release, the goal for the Java API should be:

  1. Even if there are zero explicit release() calls in the Java code, there will be no memory leaks.
  2. You can call release() as many times as you want. However, calling any other function after calling release() on the same object will always throw an exception.
  3. Key and KeySet should implement Closeable, so that try-with-resources can be used.
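Goals 2 and 3 can be sketched with a mock handle. ReleasableKey is a hypothetical stand-in: the real Key would call keyDel through JNA on the first release(), and the binding could use AutoCloseable (the supertype of Closeable) to enable try-with-resources.

```java
// Sketch of goals 2 and 3: release() is idempotent, any other call after
// release() throws, and try-with-resources works via AutoCloseable.
// The native handle is mocked; the real binding would call keyDel here.
class ReleasableKey implements AutoCloseable {
    private final String name;
    private boolean released = false;

    ReleasableKey(String name) { this.name = name; }

    String getName() {
        if (released) throw new IllegalStateException("key already released");
        return name;
    }

    void release() {
        // Idempotent: the native keyDel would run only on the first call.
        released = true;
    }

    @Override public void close() { release(); }
}

class ReleaseDemo {
    public static void main(String[] args) {
        try (ReleasableKey scoped = new ReleasableKey("user:/foo")) {
            System.out.println(scoped.getName()); // prints "user:/foo"
        } // close() -> release() runs automatically here

        ReleasableKey key = new ReleasableKey("user:/bar");
        key.release();
        key.release(); // allowed: releasing twice is a no-op
        try {
            key.getName();
        } catch (IllegalStateException e) {
            System.out.println("use after release rejected"); // prints this line
        }
    }
}
```

With this shape there is no way to trigger a native double free from Java: the second release() is a no-op and any use-after-release fails loudly with an exception instead of a segfault.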

@markus2330
Contributor Author

Yes, I agree with @kodebach, let us release something where JNI plugins can be used without segfaults asap (the deadline for students is in one week), and fix everything to have no mem-leaks in July.

It is probably a good idea to release #3863, so that @tucek can get some feedback on how people like the changes.

#3861 would be obviously great to have.

@markus2330
Contributor Author

@mpranj Is there something still open? Do you need help?

@mpranj
Member

mpranj commented Jun 7, 2021

Is there something still open? Do you need help?

No, everything is merged. I am having some trouble with Docker Hub timeouts in the release pipeline. I hope a fresh re-run solves it; I have also rebooted the agents.

@markus2330
Contributor Author

Perfect! #3882 can also be merged afterwards as it is (nearly) only docu. I don't think we can/should wait for a fix for #3881.

What is the state of the release notes?

@mpranj
Member

mpranj commented Jun 7, 2021

What is the state of the release notes?

Release notes are basically done and waiting for the release pipeline to finish so we can add the artifacts and hashsums. The release pipeline will definitely take 2 hours or so, so I have time to polish release notes right now. If you want to add anything it is still possible ;)

@robaerd
Member

robaerd commented Jun 7, 2021

The release pipeline failed because the warning plugin probably changed its method names.

java.lang.NoSuchMethodError: No such DSL method 'warnings' found among steps ...

Therefore the Docker image publish stage of the release pipeline wasn't executed.

@mpranj
Member

mpranj commented Jun 7, 2021

The release pipeline failed because the warning plugin probably changed its method names.

Thank you for pointing this out. We migrated to a new warnings plugin as the old one was deprecated. I already have a fix for this and will re-run the stage once the main release is published on the website.

@mpranj
Member

mpranj commented Jun 7, 2021

Small update: the release is basically published, but not yet announced due to small changes in release notes and re-running the release pipeline with a small fix.

@mpranj
Member

mpranj commented Jun 7, 2021

Unfortunately re-running the release pipeline with a new build number does not fix the errors and does not work as I expected.

We would have to push a completely new release to make everything green (which is not worth the time).

When we re-released with a new build number, the main 0.9.6 tarball (on the FTP) was overwritten, so I had to change the hashsums in the release notes. I think the scenario of re-building a 0.9.6-x release is currently not usable, and GitHub also does not let us use a 0.9.6-1 tag for a release version.

Maybe we just do 0.9.7 sooner?

@markus2330
Contributor Author

@mpranj Nevertheless, great work, thank you for getting it out! Important is that we got (source) packages and we have a new tag to refer to. 🎆

which is not worth the time

What are the things that still take so much time? Can you list them so that we can prioritize #3519? Having smooth and fast releases is imho more important than even more features in CI. 🚀

Maybe we just do 0.9.7 sooner?

Yes, let us do 0.9.7 sooner! 💖

@robaerd
Member

robaerd commented Jun 7, 2021

Unfortunately re-running the release pipeline with a new build number does not fix the errors and does not work as I expected.

The build-documentation stage failed again because of the old warnings plugin.
Additionally, the publish-maven-artifacts stage now failed because no patch/revision is included in the version tag, i.e. it was only published as 0.9.6 instead of 0.9.6-2, conflicting with the old version. This could be solved by simply adding a revision parameter (as we already have for the different packages).

Maybe we should also extract the git-push stage of the main repository into its own stage and run it last, so the release pipeline can be re-run as often as needed if an error occurs, without having to worry about the git-tag issue?

@mpranj
Member

mpranj commented Jun 7, 2021

What are still the things that take soo much time?

I'm just saying it's not worth anyone's time to do 0.9.7 today just so Jenkins has green icons. Everything else worked fine.

Otherwise: the automation is great, but the pipelines take a long time. Let's say you prepare a release and write some release notes. You have to wait at least:

  • 1.5 hrs for a master build to finish (after you wrote initial release notes and bumped versions)
  • 2 hrs for the release pipeline
  • 1.5 hrs for a master build to finish (after you write the final release notes with hashsums, so that the website shows the release)

So you need roughly 5 hours to push through the release. If any stage fails, which usually happens, you'll need more like double the time.

@mpranj
Member

mpranj commented Jun 7, 2021

The build-documentation stage failed again because of the old warning plugin.

That should be fixed now, but I don't want to re-run it...

@kodebach
Member

kodebach commented Jun 7, 2021

GitHub also does not let us use a 0.9.6-1 tag for a release version.

It should be possible; other repos use similar versions. Although, according to semantic versioning, a version like x.y.z-ABC is a pre-release version of x.y.z, where ABC identifies the pre-release and lexicographical order determines the ordering of different pre-releases of a single x.y.z version (commonly used are e.g. 1.0.0-alpha01, 1.0.0-alpha02, 1.0.0-beta01, etc.).

You have to wait at least: [...]

Do we really need a second master build for the updated hashsum? Can't we put a placeholder in before we trigger the release pipeline and let the release pipeline replace the hashsum and push to GitHub?

@mpranj
Member

mpranj commented Jun 8, 2021

Do we really need a second master build for the updated hashsum? Can't we put a placeholder in before we trigger the release pipeline and let the release pipeline replace the hashsum and push to GitHub?

We can, but the result is the same: the master pipeline needs to run through again, to rebuild the website and show the news.

@kodebach
Member

kodebach commented Jun 8, 2021

Couldn't the release pipeline update the website directly? I assume whatever placeholder we use will just appear in some HTML file where it would be easy to replace.

@mpranj
Member

mpranj commented Jun 18, 2021

Couldn't the release pipeline update the website directly? I assume whatever placeholder we use will just appear in some HTML file where it would be easy to replace.

Appending the website build directly to the release pipeline would already save some time. (We avoid running the whole master pipeline)

I'm not a huge fan of editing auto-generated HTML files, but yes, that would be the fastest option.

@kodebach
Member

Just building the website shouldn't take very long, I assume. So yes, that would probably be better than "hacking" something into the generated HTML.
