Currently, and still in one-man-army mode, the code for the next version (6.1) can be found here. There is a big difference between 6.0 and 6.1: 6.1 is more standardized and PSR-4 compliant. However, the goal is to keep compatibility with 6.0 so that developers can upgrade to proper code without tearing their hair out. 6.1 will be merged into the master branch as soon as it has been properly tested, which is primarily done with Bamboo and Pipelines against regular e-commerce systems.
Requirements and recommended addons
- curl - not required; the package will fail over to streams when curl is missing.
- ssl - not required, but without it you will proceed without https support.
- soap/xml - only required if you intend to use XML and SOAP calls.
- DOMDocument - only required if you intend to parse plain HTML.
- laminas - only required if you intend to use extended RSS reading or read DOMDocuments with an extension; the package will fail over to xml without laminas.
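The fail-over order above can be inspected at runtime with standard PHP capability checks. This is a minimal sketch (not part of the package itself); the functions used are plain PHP, and the `Laminas\Feed\Reader\Reader` class name assumes the laminas-feed component is what provides the extended RSS support.

```php
<?php
// Sketch: detect which optional capabilities are available, mirroring the
// requirements list above. Each check uses standard PHP introspection.
$capabilities = [
    'curl'    => extension_loaded('curl'),     // missing: falls over to streams
    'ssl'     => extension_loaded('openssl'),  // missing: no https support
    'soap'    => extension_loaded('soap'),     // needed for SOAP calls
    'xml'     => extension_loaded('simplexml'),// needed for XML parsing
    'dom'     => class_exists('DOMDocument'),  // needed for plain HTML reading
    // Assumed class name for laminas-feed; missing: falls over to xml.
    'laminas' => class_exists('Laminas\\Feed\\Reader\\Reader'),
];

foreach ($capabilities as $name => $available) {
    printf("%-8s %s\n", $name, $available ? 'available' : 'missing');
}
```

The actual module performs this kind of selection internally; the sketch is only useful for verifying your environment before installing.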
|Branch|Development started|Initial Release|Active Support Until|Maintenance|PHP Support|
|---|---|---|---|---|---|
|5.0|2016-12-16|Never|2017-08-17|2017-08-17|5.4 - 7.4|

Additional notes:

- Follows the 6.0 branch.
- (5.3) 5.4 - 7.4
- According to commit b4dea50fd24
- 5.6 - 8.0 (from >6.1.1)
Autogenerated documentation is published daily at https://gitreport.tornevall.net/tornelib-php-netcurl-6.1/.
To keep compatibility with v6.0, the plan is to keep the primary class MODULE_CURL callable from the root position. From there it will probably be recommended to switch over to a PSR-friendly structure, but the base will remain in 6.1. The best way to instantiate the module in the future is to call the same wrapper that MODULE_CURL itself uses: NetWrapper (TorneLIB\Module\Network\NetWrapper), as it is planned to be the primary driver handler.
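A hedged sketch of what instantiating NetWrapper directly could look like. The class name comes from the text above; the `request()` method name and the target URL are assumptions for illustration only - consult the generated documentation for the actual API.

```php
<?php
// Sketch, assuming a composer-based install of tornelib-php-netcurl 6.1.
require __DIR__ . '/vendor/autoload.php';

use TorneLIB\Module\Network\NetWrapper;

// NetWrapper is planned as the primary driver handler, so calling it
// directly should use the same path as the legacy MODULE_CURL.
$wrapper = new NetWrapper();

// Hypothetical call; the method name is an assumption, not confirmed API.
$response = $wrapper->request('https://example.com/');
```

The point of this structure is that legacy MODULE_CURL callers and new PSR-4-style callers end up in the same wrapper, so both code paths stay compatible.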
NETCURL is a network library with a backstory rooted in the need to find and extract lists of IP addresses from websites, utilize them, and register them as proxies of different types. In short, NETCURL was originally built for the DNSBL API.
NetCURL has changed purpose over time. What was planned as a scraping tool became, thanks to the self-selecting engine that was added, a tool that automatically fetches and parses data and transforms it into standard output arrays/objects. The implementation supported common communication methods, including SOAP and, partially, PHP's built-in streams through extra drivers - and it was autoselective: if the primary driver wasn't present, it jumped to the next one, and so on.
The intention with NETCURL was probably never to reinvent the wheel. Many solutions already did this, but fell short when it came to setting the utilities up: very often developers had to make all configurations manually (both Guzzle and Zend do this), while NETCURL fired up a high-verbosity communications driver automatically. In our case, we wanted to utilize as many packages as possible and choose the best preferred method without asking questions. The goal? Standard output, ready to be consumed by the endpoint developer.