Component split #93
Comments
This makes a ton of sense as far as I'm concerned; it's exactly what we're doing in zend-mvc currently. I'd recommend:
With regards to the
👍 It's the logical continuation of the hard work done by @weierophinney.
@weierophinney sorry for the late response. Nice to hear your ACK. There are some details I'd like to discuss first:
Cheers
@weierophinney @Ocramius Today I had some time and have already moved a lot of parts of zend-cache into their own repos.
Cheers
Sorry, I didn't join the discussion before. Yes, this looks good, especially if we can manage to make the extension dependencies required when picking single packages (that would require requesting the single packages via composer, manually). I'm also not sure if it's worth splitting out cache components that don't require extensions...
@Ocramius Thanks for your comment! I have moved the serializer plugin into another package as it requires the
It's all about splitting out the parts that require special dependencies. I have only one exception, the … The following packages also need to be moved:
Everything else should be kept in the main repo as it doesn't have special dependencies. Right now I have one problem that I don't know how to solve:

```php
StorageFactory::factory([
    'adapter' => [
        'name'    => 'apc',
        'options' => ['ttl' => 3600],
    ],
    'plugins' => [
        'exception_handler' => ['throw_exceptions' => false],
    ],
]);
```

But how can I register the services on installation using the
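One possible direction for the registration problem (a hypothetical sketch, not an existing zend-cache mechanism): each split package could ship a small config file, and the main zend-cache package could merge all of these at runtime to discover which adapters are actually installed. The `zend_cache` key and the overall structure here are assumptions:

```php
<?php
// Hypothetical config fragment shipped by a split package such as
// zendframework/zend-cache-apc. The main package would merge these
// fragments to know which storage adapters are available.
return [
    'zend_cache' => [
        'storage_adapters' => [
            'apc' => 'Zend\Cache\Storage\Adapter\Apc',
        ],
    ],
];
```

With something like this in place, `StorageFactory::factory()` could keep accepting short adapter names while resolving the class from the merged configuration.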
Makes sense
Can't we just use the
@Ocramius There are actually the following problems:
@Ocramius Also moved the zend-server adapters to their own repo, and after several hours I could make the tests run on Travis by installing Zend Server, rewriting the phpunit start script, and testing over an HTTP call :) @weierophinney Could you please take a look at https://github.com/marc-mabe/zend-cache-zendserver/blob/master/.travis.yml and check whether the way I run the tests is OK and allowed. Thanks.
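For readers who cannot open the link, the general shape of such a setup could look like the sketch below; the helper script name and URL here are assumptions, not taken from the actual file:

```yaml
# Hypothetical sketch only; see the linked .travis.yml for the real setup.
language: php
install:
  # Download and install Zend Server inside the Travis VM (assumed helper script).
  - ./tests/install-zend-server.sh
script:
  # Run the PHPUnit suite through the web server instead of the CLI, so the
  # tests execute inside a runtime where the Zend Server cache APIs exist.
  - curl -fsS "http://localhost/run-phpunit.php"
```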
I love the idea of separating out the various storage adapters to their own repositories. That way users are only getting code related to the adapter(s) they use, and we can have much easier, more targeted instructions on contributing (typically via vagrant and/or docker). I've looked at the zend-cache-zendserver Travis setup, and it looks reasonable. Would this split target a v3 release, then? And I assume we'd want each adapter brought into the zendframework organization, and tagged with a stable 1.0.0 release before removing them here, correct?
Thanks for the review 😃
Yeah, I also have one or two other things I want to achieve for 3.0 because of small BC breaks.
Yes, of course, before tagging a stable version of
Does anyone have an idea, or know someone who would be able to test the WinCache adapter?
@marc-mabe The only thing I can think of is to use a Windows vagrant image as a base, and then have the Vagrantfile download and install PHP to it. IIRC, Travis has windows images, too, which might make this possible there.
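As a rough illustration of that idea (every box name and provisioning step here is an assumption, not a tested setup):

```ruby
# Hypothetical Vagrantfile sketch for testing the WinCache adapter on Windows.
Vagrant.configure("2") do |config|
  # Assumed publicly available Windows base box.
  config.vm.box = "gusztavvargadr/windows-server"
  config.vm.provision "shell", inline: <<-SHELL
    # Install PHP (e.g. via Chocolatey), then enable the wincache extension
    # in php.ini before running the test suite.
    choco install php -y
  SHELL
end
```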
As the support for |
@boesing We can when we bump to PHP 7.1. |
This repository has been closed and moved to laminas/laminas-cache; a new issue has been opened at laminas/laminas-cache#7. |
I would like to get your opinions about splitting zend-cache into different components.
The reason for that is simply a testing and dependency hell.
In my opinion it makes sense to split parts into their own repositories as long as the part requires a non-standard extension or another currently optional dependency.
Thoughts?
This is a structure of how this could look:
zendframework/zend-cache
zendframework/zend-cache-serializer
zendframework/zend-cache-apc
zendframework/zend-cache-apcu
zendframework/zend-cache-dba
zendframework/zend-cache-memcache
zendframework/zend-cache-memcached
zendframework/zend-cache-mongo
zendframework/zend-cache-redis
zendframework/zend-cache-session
zendframework/zend-cache-wincache
zendframework/zend-cache-xcache
zendframework/zend-cache-zendserver
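To make the extension dependencies hard requirements of the split packages, each adapter package's composer.json could pin the extension directly; the version constraints below are illustrative assumptions:

```json
{
    "name": "zendframework/zend-cache-apcu",
    "require": {
        "php": ">=5.5",
        "ext-apcu": "*",
        "zendframework/zend-cache": "^2.6"
    }
}
```

The main zendframework/zend-cache package could then list the adapter packages under `suggest`, so the extension only becomes required when a user explicitly requires that single adapter package.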