[TF2] [x64_linux_test] [Feature Request] Increase edict limit from 2048 to 8192 #5447
Moving to 64-bit is going to allow us to do this in the future -- but I would like to not do it immediately with the release of the 64-bit port to avoid any other issues that brings up.
I did a stress test with some people. Our experience was that the multiplayer experience with 100 players (some bots) + boss entities was pretty smooth for everyone (except the host, which wasn't running the dedicated server). The entity limit was the only major limitation. I'm not particularly connected to the TF2 community (we were just testing the performance of the new Linux client), but I can definitely see the benefit of increasing this limit. It's nice to hear that x64 is the first step towards that :)
Quick question: how would a 64-bit port affect performance? Shadows specifically? What would change?
They should implement a better entity system first. They are pre-allocating the entity list in client.dll itself, and because of 64-bit pointers, the pre-allocated entity list is now huge. Because of this pre-allocated list there is always going to be a limit, and the higher you set it, the higher the memory usage and file size. I think they should go for a safer approach and create a dynamic list for entities (kinda like the
I don't think this is a very good idea.
Why? It's a way more reliable approach.
Can you tell me the difference in memory, in bytes? Note that I'm not asking for something like "it's a huge difference"; I'm asking for your estimate in bytes.
Please forgive my verbose and long-winded counter-argument (I use this term in the gentlest way possible, as this is more cautionary/informative than a concentrated effort to shoot down an innocent suggestion). I believe reliability, as quantified by performance profiling, game stability and memory management, is already best served by static, fixed-length allocation. A more experienced programmer can correct me if my naive generalization is wrong, but: aside from sacrificing determinism(!), whatever minor memory savings you initially achieve, you'll end up paying back later, plus interest. Things like C#'s
I can kinda see that, but may I remind you that Source 1 was released in 2004, and modern hardware (even the low end) is incomparable to back then. We don't have to manage memory that carefully anymore.
I don't wanna be an asshole either, but do I have to remind you of the current state of TF2's source code?
But your suggestion was in the domain of runtime safety and reliability, no? Low-level changes to entity memory management would either require TF2 to have its own branch of Source 2013, or else require that the other games inherit all the changes. I would guess, with high confidence, that this constitutes a blocker, due to maintenance burden or guaranteed breakages alone. While not an insider, I do not believe a complexifying, low-level change to something like memory management can feasibly be bolted onto a large, existing base like this. Even if the source resembles a proverbial Tower of Babel, it's "stable" to anyone using it. Legacy software, even. The breaking changes would warrant a new major version.

The refactor or rewrite you want, assuming it suits your definition of modernization, may well be Source 2. I specifically avoided mentioning Source 2 as an example of hunk/pool memory management because, unlike Source 1 and the early Quake games, its source tree is not currently available to read. I cannot claim, nor prove, how Source 2 manages memory. However, contiguous storage and cache locality are still prioritized by the designs of other contemporary engines with source available, like Unreal. The gulf between RAM and CPU cache speeds has only widened since 2004, and determinism and an easy, contiguous memory layout for vectorization have only grown more important. These aspects are provably not outdated; quite the opposite.

If your primary concern is reported RAM usage, like the pool seeming too bloated when there are not many edicts, you may instead want to look into specialized containers like sparse sets, as opposed to dynamic arrays. But if a user starts a process and still exhausts their memory shortly after, isn't the ultimate outcome the same? Fragmentation and overhead would still negate the paltry tens of megabytes' worth of difference, even growing the process's memory beyond what it would otherwise have been.

I do not intend to appear overly stern or antagonistic, as I "learned" these details of misusing dynamic arrays very early on, by my own fault, the hard way. While I agree with your intent of shrinking the memory footprint as much as possible, that particular approach only achieves the opposite.
Right, I kinda forgot that was part of Source itself. I wonder if we'll ever get an official TF2 on a modern engine lol.
No worries. It's a complex problem with a massive scope. Maybe if Source 2 ever got an SDK release analogous to the HL2:MP one from the 2007/2009/2013 base, we'd at least see a proof of concept.
Ficool mentioned in the redsun.tf Discord that an edict limit increase is planned for TF2 in the future.
I believe now is the right time to tackle this, so that community servers and community mod developers (such as the SourceMod team) can safely test things under the x64_linux_test branch and report any issues, be it in vanilla TF2 or modded TF2.
It's easier to rewrite things once than twice, compared to shipping the 64-bit update and the edict increase as separate major updates.