Improve handling of platform config #724
Does this have to be done? I don't think the current pattern is all that unreasonable. Apps each have their own config, core has its config.
If we start actively changing config options often, it seems like it will lead to more breakage... I'd prefer reasonable stability (keep things as-is), with the expectation that cfe_platform_cfg should typically just work (except when we actually change the core). If a mission does want to go off on their own they certainly can, but the core framework doesn't seem to me like it needs a "fix".
Not changing anything may be an option, but there are some concerns:
This isn't currently an issue, but it will become more of one if we really want to further modularize the CFE core components and allow missions to replace them.
The "sample_app" is possibly a bad example as it only has a single perf ID in its mission config, but still, it is something that the users are expected to customize (because each mission individually manages perf IDs) but currently the only way to do so is by editing the Something like the "FM" CFS app is possibly a better example, which has lots of platform scope options here: https://github.com/nasa/FM/blob/master/fsw/platform_inc/fm_platform_cfg.h These include things related to its structure sizes which are not always easy to put into a loadable config table. If a user wants to customize then currently the only option is to edit the file directly. Furthermore we don't have the ability to use a different app config for different mission config which might be a logical expectation.
Obviously the better/preferred solution is to not include these headers from apps at all. The workaround that's currently there was only ever supposed to be temporary, for backward compatibility until apps could get updated, but it's been about 5-6 years and some apps still aren't updated. For msgids the goal was to get them out of a database of some type, but that probably won't be anything near term either. In the meantime users are getting caught on this and we don't have a real solid solution/alternative.
Some of these concerns can also be addressed by changing the patterns used in app CMake files, such as replacing direct references to the app's own platform_inc directory. The direct inclusion of the default headers could then become a fallback rather than the hardcoded behavior.
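As a rough illustration of that CMake-side change (the directory layout and the MISSION_DEFS variable are assumptions here, not necessarily the actual cFE build internals), the fallback might look something like:

```cmake
# Hypothetical app CMakeLists.txt fragment: prefer user-supplied config headers,
# keep the app's bundled defaults available as a fallback.
if(EXISTS "${MISSION_DEFS}/platform_inc")
  include_directories("${MISSION_DEFS}/platform_inc")  # searched first: user overrides
endif()
include_directories(fsw/platform_inc)                  # app defaults remain the fallback
```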
How about implementing a general pattern where any header file prefixed with the target name gets copied into a target-specific include directory early in the search path (with the name updated as needed), and anything prefixed with the mission name goes to the mission include directory? I'd think it would be useful to still fall back on the default if there isn't a specific one in the user config. The perspective is that a change like this under the hood doesn't require updating all the app CMake files, and no user action is required to get the default behavior. It may also be a possible solution to the mission-configured message header, if we broke up ccsds.h to separate the actual definition; then no defines would need to clutter the code... I like this approach way better than adding the MESSAGE_FORMAT_IS_CUSTOM.
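A sketch of how that copy step could be implemented, assuming user-supplied headers live in the mission defs directory and carry a `<target>_` filename prefix (all variable names and paths here are hypothetical):

```cmake
# For each user-supplied header named "<tgt>_foo.h", copy it to a per-target
# include dir as plain "foo.h"; that dir is placed first on the search path,
# so it shadows the app-provided default "foo.h" whenever an override exists.
file(GLOB TGT_OVERRIDES "${MISSION_DEFS}/${TGTNAME}_*.h")
foreach(SRC ${TGT_OVERRIDES})
  get_filename_component(FNAME ${SRC} NAME)
  string(REPLACE "${TGTNAME}_" "" FNAME ${FNAME})   # strip the target prefix
  configure_file(${SRC} "${CMAKE_BINARY_DIR}/inc/${TGTNAME}/${FNAME}" COPYONLY)
endforeach()
include_directories("${CMAKE_BINARY_DIR}/inc/${TGTNAME}")  # searched before defaults
```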
Yes, I will try this out... I'm not sure that simply catching every header file is the right way, but if we accept a general naming convention along the lines described above (user-supplied headers prefixed with the target or mission name), then we can generate wrapper headers and provide an include path to them, alleviating the need for apps to include their own platform_inc/mission_inc dirs, although we could still allow that as a default/fallback.
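A minimal sketch of the generated-wrapper variant, using a hypothetical sample_app header as the example (paths and variable names are again assumptions, not the actual implementation):

```cmake
# Generate "sample_app_platform_cfg.h" in the build tree as a one-line wrapper
# that points at either the user's override or the app's default copy.
set(USER_HDR "${MISSION_DEFS}/${TGTNAME}_sample_app_platform_cfg.h")
set(DEFAULT_HDR "${sample_app_SOURCE_DIR}/fsw/platform_inc/sample_app_platform_cfg.h")
if(EXISTS ${USER_HDR})
  set(REAL_HDR ${USER_HDR})
else()
  set(REAL_HDR ${DEFAULT_HDR})      # fall back to the app-provided default
endif()
file(WRITE "${CMAKE_BINARY_DIR}/inc/${TGTNAME}/sample_app_platform_cfg.h"
     "#include \"${REAL_HDR}\"\n")  # wrapper simply includes the chosen file
```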
Sounds good... could you make it APP_NAME or CFE_MODULE_NAME (or whatever) so I can customize the msg header using the same mechanism?
OK, after digging in a bit, I think it can work for mission-scope stuff, but the problem (mismatch?) remains that we don't actually build apps on a per-config basis; we build apps on a per-toolchain basis. CFE core, OTOH, is built per-config, so (potentially) more than once per toolchain. We could move apps to be the same way, but it'll involve moving some logic around, and it will affect the resulting build tree. In some ways it might make sense to do that. It would reduce the modularity of apps to some degree, but maybe that's not a big concern.
It would mean baking the config name into the target name, like is done for CFE core (that's why the core binary's name includes the config/target name).
The other option is to totally turn the build around and create a sub-build per config rather than per toolchain. While this would be a major change, it might actually be more backward compatible/less disruptive than the former.
I don't have strong opinions related to any of the options, but building per config like the CFE core, as a consistent pattern, sounds reasonable. Would this approach simplify/address any other user concerns?
The existing build system built target executables grouped by toolchain as a proxy for CPU architecture + machine options/flags. The app binaries would be built once and copied to any/all targets sharing that toolchain. The side effect of doing this is that the application needs to be written in a CPU-agnostic manner, performing its subscriptions and configurations from runtime table data rather than hardcoded/fixed values. Unfortunately most apps are not coded that way, so workarounds were needed. This changes the top-level process to include the "platform" within this target build logic, effectively treating different platform configs as entirely different builds, even if they share the same toolchain file. As a result, binaries will only be shared between targets that explicitly set the "TGTx_PLATFORM" setting in targets.cmake to the same value.
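For illustration, the user-facing side of this might look like the following targets.cmake fragment; TGTx_PLATFORM comes from the description above, while the other variable names follow the existing targets.cmake conventions as an assumption:

```cmake
# targets.cmake: two CPUs share one toolchain, but binaries are now shared
# only because they explicitly name the same platform config.
SET(TGT1_NAME cpu1)
SET(TGT1_PLATFORM default_cpu1)   # platform config used for this target's build
SET(TGT2_NAME cpu2)
SET(TGT2_PLATFORM default_cpu1)   # same value -> reuses the cpu1 build products
```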
Is your feature request related to a problem? Please describe.
Almost every app, including the CFE core apps, has some sort of "platform scope" internal config options. The way we handle this for apps and external entities is currently different from the way we handle it for CFE core. To move forward we need to consolidate this into a single, consistent method that can be applied to both external apps and core apps.
Describe the solution you'd like
CMake already generates the cfe_platform_cfg.h file, so with some tweaks we can get it to work for everything. There are several possible approaches to consider:
Option 1: Do we generate a single "monolithic" platform header file and let all apps include it?
Option 2: Do we generate a per-app "focused" platform header file which is only used by that app?
Advantage: cleaner and better scoped. Apps/modules only get a header file containing their own config items; they can't use what they can't see, and thereby can't introduce unexpected ABI dependencies.
Disadvantage: it would probably need a different name, as we can't call everything "cfe_platform_cfg.h" (too confusing), and it would probably (eventually) require breaking up the current cfe_platform_cfg.h into a config file per core app (es_platform_cfg.h, evs_platform_cfg.h, etc.). In the current CFE core there are examples of cross-pollination too, where EVS uses data structures defined by ES which are based on platform config. These become undocumented/uncontrolled ABI dependencies, so we'd have to fix those.
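To make the scoping in option 2 concrete, here is a hypothetical sketch (the module list, target names, and directory layout are all illustrative, not the actual cFE build layout):

```cmake
# Each core module sees only its own generated config directory, so e.g. EVS
# cannot (even accidentally) include es_platform_cfg.h at compile time.
foreach(MODULE es evs sb tbl time fs)
  target_include_directories(core_${MODULE} PRIVATE
      "${CMAKE_BINARY_DIR}/inc/${TGTNAME}/${MODULE}")
endforeach()
```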
Additional context
Option 2 is cleaner but arguably more work; it might take a little longer to implement and would have a bigger impact on apps.
This type of issue is coming more to the forefront when considering things like #554, but there have been periodic issues posted in the past regarding the "weirdness" around the way cfe_platform_cfg.h is handled, so it would be good to generally fix that too, but we need to get some sort of community consensus before implementing anything.
Requester Info
Joseph Hickey, Vantage Systems, Inc.