Unable to bind ports: Docker-for-Windows & Hyper-V excluding but not using important port ranges #3171
Comments
The solution in googlevr/gvr-unity-sdk#1002 works for me but is not ideal.
That workaround does not work for me, unfortunately, despite having admin rights.
@veqryn the workaround worked for me; the steps are:
when your system is back, you will be able to bind to that port successfully.
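The steps themselves aren't quoted above. Later comments describe the workaround as disabling and re-enabling Hyper-V; assuming that is what is meant here, it looks roughly like the following (the DISM feature name and flags are an assumption, not quoted from the linked issue, and each step needs a reboot):

    :: assumed reconstruction of the workaround; run from an elevated command prompt
    dism.exe /Online /Disable-Feature /FeatureName:Microsoft-Hyper-V-All
    :: reboot, check that the excluded port ranges are gone, then re-enable Hyper-V:
    dism.exe /Online /Enable-Feature /FeatureName:Microsoft-Hyper-V-All /All
    :: reboot again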
There was an obscure Docker error when trying to start an Electrum server in tests. [1] It appears that there is a conflict between Docker and Hyper-V on some range of ports. A workaround is to just change the port we were using. [1] docker/for-win#3171
Issues go stale after 90d of inactivity. Prevent issues from auto-closing with an /lifecycle frozen comment. If this issue is safe to close now please do so. Send feedback to Docker Community Slack channels #docker-for-mac or #docker-for-windows.
/remove-lifecycle stale
What's the status for this? Today I had 100s of reservations, which caused Skype for Business to stop working since it couldn't find any available ports. Uninstalling docker/hyper-v/containers removed these reservations and Skype for Business had ports to work with again. This is a critical error that should be focused on, since it reserves so many ports that aren't in use. I can't uninstall docker/hyper-v/containers or apply similar workarounds each time I get problems with Skype for Business in conjunction with meetings.
This makes hyper-v unusable for me and most of my company.
@enashed Does your solution (disabling and re-enabling Hyper-V) have any side effects? Will my virtual switches and virtual machines still be there after applying your solution?
Answering myself: this actually does have side effects. It deletes your virtual switches. You should keep that in mind when applying this solution.
This issue is still present.
Hi guys, I have the same problem. This is really blocking me.
@enashed's answer worked for me perfectly! Thanks!
IntelliJ IDEA Community Edition doesn't start because it tries to bind to the first available port in the range 6942-6991, and the following command shows this port range is reserved/blocked by hyper-v/docker-for-windows. Frankly, I don't know whether it's because of docker or some other app.
Protocol tcp Port Exclusion Ranges
Start Port    End Port
The "hns" service is very... greedy.
|
This is /obviously/ not docker's problem (as best I can tell); it's probably not even hyperv's. Commenting here as this seems to be a frustrating and common end of the journey for googlers. What follows is at least "one" of the resolutions/explanations. On one of my machines the dynamic port range was not updated to the "new" start port, and, I guess related to a resolved bug in Windows, this has now been "exposed" as a serious problem (e.g. I couldn't even bind to port 3000 for node dev -- "access denied" is, I think, a valid response, but it's not the typical "port in use" root cause). Current dynamic port config:
To set it to the current config: (Likely a reboot of your host is required.) While it's bizarre that I only just ran into this issue less than 4 hours ago -- I've been doing docker/node/go dev for the last few months straight, using docker edge, etc. -- this appears to have resolved my port exclusion issues (I have no large ranges of reserved ports below 50000 now; previously I had 1000 port range exclusions all over the place).
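Neither the config output nor the commands are shown above. For reference, the current dynamic port range is normally inspected with netsh along these lines, run from an elevated prompt (the matching "set" command appears under the SOLUTION comment further down):

    :: shows the start port and number of ports in the TCP dynamic range
    netsh int ipv4 show dynamicport tcp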
@crozone As a docker desktop end user, I'd like docker to take care that the ports it is using are really usable for docker.
Also getting this problem. My RabbitMQ image could not bind to any of 4 different random ports. The winnat solution below worked.
But as stated: no subsequent internet in WSL, so I can't monitor kube pods or whatever.
At the very least, docker should report an error that it can't get the ports it needs and not fail silently.
…On Mon, Mar 13, 2023, 13:02 Mark Ernst ***@***.***> wrote:
Also getting this problem. My RabbitMQ image could not bind to any of 4 different random ports. The winnat solution below worked. I fix this issue by running:
net stop winnat
net start winnat
But after that I have no internet in WSL.
But as stated: no subsequent internet in WSL, so I can't monitor kube pods or whatever.
@pmorch definitely not silent, no. But there is a big red bar with the error up there though, which is enough for me to notice it. Still strange, because the ports are open but claimed by WSL for some reason. Definite problem since the new WSL.
@ReSpawN: Ok, perhaps with the newest version it does show an error. I totally expect that. Thankfully I don't use Windows any longer, so this is no longer an itch of mine.
This bug gave me a lot of headaches for a year and I just found this issue. I had to disable Hyper-V every time I needed a program to work. Hyper-V is supposed to make development easier, not harder!
Example solution:
But after that, I don't have access to the internet in WSL.
Weird, it worked for me? I haven't had the problem since, but I'll try and verify when it pops up again.
How is this still not solved... very frustrating problem.
I'm having the same issue. But sometimes it says "The service is starting or stopping. Please try again later" after I run "net stop winnat"; then I can't make it start again, and I get stuck there. :(
Still encounter this bug from time to time: Error response from daemon: Ports are not available: exposing port TCP 0.0.0.0:9000 -> 0.0.0.0:0: listen tcp 0.0.0.0:9000: bind: An attempt was made to access a socket in a way forbidden by its access permissions. But following this can fix it:
SOLUTION (TL;DR version): The correct solution is simply to reset the "TCP Dynamic Port Range" so that Hyper-V only reserves ports in the range we have set. You can reset the "TCP Dynamic Port Range" to 49152–65535 by running the following command with administrator privileges, but you can also change it to a smaller range if you think it is too large.
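The command itself is not quoted here. Resetting the TCP dynamic range to the Windows default of 49152–65535 is normally done with netsh from an administrator prompt, along these lines (num = 65535 - 49152 + 1 = 16384; as noted in an earlier comment, a reboot of the host is likely required afterwards):

    :: reset the TCP dynamic port range to the Windows default
    netsh int ipv4 set dynamicport tcp start=49152 num=16384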
The article discusses a problem encountered while using Docker: ports of Docker containers not working, with a common error related to access permissions. The root cause is attributed to the "TCP dynamic port range" in Windows, with conflicts arising from Hyper-V's reserved port numbers. The article acknowledges that stopping and starting the Winnat service may offer a temporary fix, but states that this method is essentially a simplified version of rebooting the computer: whether it works is random, with some users reporting success and others not. The article recommends against relying on it as an optimal solution.
Diagnostic ID: BB0297BB-C287-4F0B-A007-72B5F2D7BD72/20190102235413
Expected behavior
Be able to bind specific ports that I have always used.
Be able to specify which ports docker/hyper-v exclude or use, and/or I expect that docker/hyper-v actually uses the ports that it is excluding and that they show up in netstat -ano as being used or listened on.
Actual behavior
If I start a service that binds on port 50051 (it is a grpc service, and that is the traditional port used by grpc), it says:
listen tcp :50051: bind: An attempt was made to access a socket in a way forbidden by its access permissions.
Information
Steps to reproduce the behavior
My own investigation:
I was extremely confused by this problem, because I was able to bind other ports, such as 8080 or 60000, yet it did not appear that 50051 was in use by anything on my system.
Running netstat -ano shows nothing using 50051.
Running Get-NetTCPConnection in PowerShell with admin privileges shows nothing using 50051.
Even if I disconnect from the internet and disable both Windows Firewall and my antivirus, and run everything as admin, I still get the errors.
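For reference, a quick way to spot-check a single port with those two tools (50051 is just the port from this report; the second line needs PowerShell):

    netstat -ano | findstr :50051
    Get-NetTCPConnection -LocalPort 50051 -ErrorAction SilentlyContinue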
After hours of google searching, I found a command that showed what happened to 50051:
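The command and its output are not quoted here; given the "Port Exclusion Ranges" header shown in an earlier comment, it is presumably netsh's excluded-port-range listing:

    :: lists the TCP port ranges reserved/excluded on this machine
    netsh interface ipv4 show excludedportrange protocol=tcp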
It seems that 50051 is excluded (whatever that means?!), even though it isn't in use by anything.
After lots of trial and error, I discovered that Docker for Windows and Hyper-V are responsible for all of those excluded port ranges above.
It also seems like all those port ranges change or increase by 1 every time I reboot, so I suppose 450 reboots from now my problem will go away, maybe...
I have never had this problem before, despite having used docker for years now.
I run lots of containers and setups that other people at my company work on and rely on, so it is not feasible for me to be changing the ports around on them to work around this issue. (Other people use the kube templates and docker-compose, and some of them connect with other docker-compose networks, etc, and expect things on certain ports.)
When I try to delete that excluded port range, I get this, despite running the command as administrator:
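Neither the delete command nor the error it produced is quoted in the report. For reference, removing an exclusion is normally attempted with netsh's delete excludedportrange, roughly like this (the start port and count here are placeholders, not the reporter's actual range):

    :: try to remove a one-port exclusion range; run as administrator
    netsh int ipv4 delete excludedportrange protocol=tcp startport=50051 numberofports=1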