Channel: Serverphorums.com

Re: Can HA-Proxy set an header when he "breaks" stick routing

On Thu, Mar 22, 2018 at 10:42 PM, Igor Cicimov <
igorc@encompasscorporation.com> wrote:

> Hi,
>
> On Thu, Mar 22, 2018 at 6:24 PM, Gisle Grimen <Gisle.Grimen@evry.com>
> wrote:
>
>> Hi,
>>
>>
>>
>> Thank you for your response.
>>
>>
>>
>> To be very precise, the feature I am looking for from HA-Proxy is that
>> when HA-Proxy does a re-dispatch, HA-Proxy also adds a header, which will
>> tell the server receiving the request from HA-Proxy that HA-Proxy has done
>> a re-dispatch. This is the critical feature we are looking for.
>>
>>
>>
>> This feature will be important both to type 1 systems, in order to
>> minimize the load on the shared session storage, and to type 3 systems,
>> in order to allow them to flush local caches of potentially stale data.
>> Both are systems we run.
>>
>
> I see, it makes more sense now; I missed this info, I must have deleted
> half of the thread. Maybe inserting a cookie via haproxy, for example
> SERVERID with the value of the server name, can help. It will still have
> the value Server1 for the first requests that have failed over to Server2,
> so checking the value will tell you the request came from a different server.
>

Actually, I think haproxy will remove the cookie from the request before
sending it to the backend server :-/ Maybe there is an option to tell it
not to, but I'm not sure.
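For example, something along these lines (untested; the backend name,
addresses and ports are made up, only the Server1/Server2 names are from your
setup). With "cookie ... insert" haproxy sets the cookie itself, and as far as
I can tell it is the "indirect" keyword that makes haproxy strip it from the
forwarded request, so leaving "indirect" out should let the backend still see
the value the client sent:

backend app_backend
    balance roundrobin
    # SERVERID carries the name of the server that handled the previous
    # request; without "indirect" the backend still receives the cookie,
    # so after a redispatch it sees the old server's name in it
    cookie SERVERID insert nocache
    option redispatch
    server Server1 192.0.2.11:8080 check cookie Server1
    server Server2 192.0.2.12:8080 check cookie Server2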


>
>
>>
>> Best regards,
>>
>>
>>
>> Gisle
>>
>>
>>
>>
>>
>> From: Igor Cicimov <igorc@encompasscorporation.com>
>> Date: Thursday, 22 March 2018 at 07:48
>> To: Gisle Grimen <Gisle.Grimen@evry.com>
>> Cc: Willy Tarreau <w@1wt.eu>, "haproxy@formilux.org" <haproxy@formilux.org>
>> Subject: Re: Can HA-Proxy set an header when he "breaks" stick routing
>>
>>
>>
>> Hi,
>>
>>
>>
>> On Wed, Mar 21, 2018 at 8:57 PM, Gisle Grimen <Gisle.Grimen@evry.com>
>> wrote:
>>
>> Hi,
>>
>> I'll try to be more specific:
>>
>> The functionality I was looking for on HA-Proxy in connection with
>> sticky-routing is the following:
>>
>> Normal flow all servers up (this is functionality available today):
>> 1. HA-Proxy receives a request
>> 2. HA-Proxy checks the sticky table and determines that that request
>> should be sent to Server1
>> 3. HA-Proxy forwards the request to Server1
>>
>> Sticky Server is down: (this is functionality I would like HA-proxy to
>> have or figure out how to configure)
>> 1. HA-Proxy receives a request
>> 2. HA-Proxy checks the sticky table and determines that that request
>> should be sent to Server1
>> 3. HA-Proxy determines that Server1 is down and selects to send the
>> request to Server2
>> 4. HA-Proxy adds an HTTP header to the request. Example:
>> sticky-destination-updated=true
>> 5. HA-Proxy updates sticky table that further request from this source
>> from now on is sent to server to Server2
>> 6. HA-Proxy forwards the request to Server2
>>
>>
>>
>> It does have this of course, see
>> https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4.2-option%20redispatch
>>
>> for example. If it didn't, many implementations would be broken, don't you
>> think?
>>
>>
>>
>> I must say though that the use of the header you insist on is not really
>> clear to me, except maybe for statistics purposes on the backend. You can
>> have two types of backends (in terms of sessions): 1) one where each server
>> is aware of the other servers' sessions (shared session storage in memory
>> or on disk) or 2) one where each server has its own sessions. There is a
>> third one where no sessions are needed, but that's not of interest here.
>>
>>
>>
>> The second case is the one for which you most probably need stickiness,
>> in which case, if Server1 goes down and Haproxy re-distributes its
>> connections between Server2 and Server3, those servers will by definition
>> reset the sessions (since they have no idea about them) and the user will,
>> let's say, have to log in to the application again on their side.
>>
>> Once done they will stick to the newly elected server. Which brings me to
>> the point where I don't understand the usage of the mentioned header in the
>> first place. Header or not, what you need/want is going to happen anyway.
>>
>>
>>
>> In the first case with shared sessions, you can use stickiness as well if
>> you like, but it is not as critical as in the case described above, since
>> Server2 and Server3 will have knowledge of Server1's sessions and it will
>> be business as usual.
>>
>> ​
>>
>>
>>
>> Next request from same source would be processed as follows on HA-Proxy
>> (assuming server3 is still up):
>> 1. HA-Proxy receives a request
>> 2. HA-Proxy checks the sticky table and determines that that request
>> should be sent to Server2
>> 3. HA-Proxy forwards the request to Server2
>>
>>
>>
>> That is already the case with Haproxy.
>>
>> ​
>>
>>
>> The assumption here is that selecting a new stickiness target because the
>> existing stickiness server is not available is something that happens
>> rarely.
>>
>> What happens in the application when the header is set:
>> The application will then flush all relevant local caches connected to
>> that user/session and so on, ensuring that the server does not work on
>> stale data.
>>
>> This allows one instance of an application to handle all requests from one
>> user/session, which allows the application to cache data aggressively
>> within the specific instance of the application. If for some reason a
>> request is forwarded by HA-Proxy to another application instance, the
>> instance will be able to determine that an instance switch has occurred and
>> can flush its potentially stale cache entries.
>>
>> You get into an issue here in the following case:
>> 1. You are first on server 1
>> 2. For some reason you are sent to server 2
>> 3. For some reason you are sent to server 1 again; without the described
>> functionality we would risk that server 1 operates on stale data
>>
>> This scenario is something that could, for example, happen during high-load
>> situations.
>>
>> Best regards,
>>
>> Gisle
>>
>>
>> On 21/03/2018, 09:57, "Willy Tarreau" <w@1wt.eu> wrote:
>>
>> On Wed, Mar 21, 2018 at 08:20:44AM +0000, Gisle Grimen wrote:
>> > Hi,
>> >
>> > Thanks for the information. That was sad to hear. In our case the
>> > traffic is coming from servers and not a web browser, so solving this
>> > with cookies is not an option. The communication between the servers is
>> > based on international standards, so we cannot add additional
>> > requirements to the server sending the requests. As such we have to
>> > solve it within our infrastructure. With a little help from HA-Proxy you
>> > could then create very efficient local caches on each node, but without
>> > it we need complicated and resource-intensive shared caches or databases.
>> >
>> > I hope this is a feature that it would be possible to add in the
>> > future, as it would help to develop simpler and more efficient
>> > applications behind HA-Proxy, which in large part can rely on local caches.
>>
>> The problem I'm having is that you don't describe exactly what you're
>> trying to achieve nor how you want to use that information about the
>> broken stickiness, so it's very hard for me to try to figure out a working
>> solution. I proposed one involving sending the initial server ID in a
>> header, for example, but I have no idea whether this can work in your
>> case.
>>
>> So could you please enlighten us on your architecture, the problem
>> that
>> broken stickiness causes and how you'd like it to be addressed ?
>>
>> Thanks,
>> Willy
>>
>>
>>
>>
>> --
>>
>> Igor Cicimov | DevOps
>>
>>
>>
>> p. +61 (0) 433 078 728
>> e. igorc@encompasscorporation.com http://encompasscorporation.com/
>> w. www.encompasscorporation.com
>> a. Level 4, 65 York Street, Sydney 2000
>>
>
>


--
Igor Cicimov | DevOps


p. +61 (0) 433 078 728
e. igorc@encompasscorporation.com http://encompasscorporation.com/
w. www.encompasscorporation.com
a. Level 4, 65 York Street, Sydney 2000

Re: [PHP] PHP 7.1.16RC1 ready for testing

On 22.03.2018 at 05:39, The Doctor wrote:

> On Thu, Mar 15, 2018 at 10:07:11PM +0100, Joe Watkins wrote:
>> Afternoon everyone,
>>
>> PHP 7.1.16RC1 is available for testing at https://downloads.php.net/~ab
>>
>> Please report any bugs you find.
>
> Major bug and this will affect all upcoming versions of PHP!
>
> Oniguruma 6.8.1 is now incompatible.
>
> One of the structs is giving major grief.

Already fixed with
http://git.php.net/?p=php-src.git;a=commit;h=4072b2787074ee8e247a6639585b49e10c5a55fe.

--
Christoph M. Becker

--
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php

[PHP-DEV] Re: [PHP] PHP 7.1.16RC1 ready for testing

On 22.03.2018 at 05:39, The Doctor wrote:

> On Thu, Mar 15, 2018 at 10:07:11PM +0100, Joe Watkins wrote:
>> Afternoon everyone,
>>
>> PHP 7.1.16RC1 is available for testing at https://downloads.php.net/~ab
>>
>> Please report any bugs you find.
>
> Major bug and this will affect all upcoming versions of PHP!
>
> Oniguruma 6.8.1 is now incompatible.
>
> One of the structs is giving major grief.

Already fixed with
http://git.php.net/?p=php-src.git;a=commit;h=4072b2787074ee8e247a6639585b49e10c5a55fe.

--
Christoph M. Becker

--
PHP Internals - PHP Runtime Development Mailing List
To unsubscribe, visit: http://www.php.net/unsub.php

Re: [PHP-DEV] what's the official position on apache threaded environments

On Mi, 2018-03-21 at 22:52 -0700, Alice Wonder wrote:

> Is there a list somewhere of what the specific issues with using zts
> in multi-threaded apache are? What modules have known issues?
>
> I haven't found it.

PHP itself should be thread-safe; if there are bugs inside PHP itself,
we try to fix them.

However: PHP links to tons of external libraries, some of them might
not be fully thread-safe or make assumptions.

An example of such an assumption, aside from obvious memory access
issues, is the current working directory: in a threaded environment all
requests handled in parallel are in the same process, and there can be
only one "current working dir" (cwd) per process. We have the VirtualCWD
layer to mitigate this, but that doesn't control what happens inside an
external library's code.
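For illustration, a small made-up example (the paths are hypothetical; under
a per-process SAPI this is harmless, under a threaded one the two requests
race on the shared cwd):

<?php
// Request A (thread 1) sets the process-wide working directory:
chdir('/srv/app-a');   // hypothetical path

// Request B (thread 2), in the same process, changes it again:
chdir('/srv/app-b');   // hypothetical path

// Request A now resolves relative paths against /srv/app-b, unless the
// VirtualCWD layer rewrites the access for PHP-level calls:
$fh = @fopen('config.ini', 'r'); // which config.ini? depends on timing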

A list for that doesn't exist, as verifying this is hard. Most
mainstream things should be thread-safe, while even some C standard
library features we use aren't always thread-safe (e.g. due to the use
of the global "errno" for returning error codes; protecting these needs
tons of locks, which can limit scalability a lot ... if thread-safe /
re-entrant versions of such library calls exist we should use them; not
using them is a bug on our side, maybe because the thread-safe API is
"new" compared to our implementation, or too system-specific).

On the more philosophical side, PHP tries to isolate requests from
each other. Whatever happens in one request should not impact the other
requests. By relying on the operating system's process isolation we
gain a boundary. A known example where this fails is some forms of
recursion where we still reach stack overflows (in way fewer situations
than in the past, but still). On a stack overflow the operating system
will terminate the process. In a per-process model this impacts only the
failing request; in a threaded model it hits all parallel threads (and
eventually the complete server, whereas in a per-process model there is
always a supervisor process watching and restarting children).
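A tiny made-up example of the kind of recursion meant here (whether it ends
in a hard stack overflow depends on the PHP version and build):

<?php
// Unbounded recursion: each call consumes stack space until a limit is
// hit; in the worst case the OS terminates the whole process, which in a
// threaded SAPI takes every parallel request down with it.
function recurse(int $depth = 0): int
{
    return recurse($depth + 1);
}
recurse();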

There have been attempts at different times by different folks (Zend, Sun
Microsystems, Microsoft, etc.) to improve this, but it never got
mainstream trust.

A bit different is having threads inside a single script run; some work
has been done there as well (-> pthreads), but often people decide to
offload longer-running tasks to external systems (microservices?)
instead of building a single large system. So that also didn't go
mainstream (while there still are use cases).

johannes


--
PHP Internals - PHP Runtime Development Mailing List
To unsubscribe, visit: http://www.php.net/unsub.php

Re: AW: transparent mode -> chksum incorrect

haproxy -vv
HA-Proxy version 1.8.4-1deb90d 2018/02/08
Copyright 2000-2018 Willy Tarreau <willy@haproxy.org>

Build options :
  TARGET  = linux26
  CPU     = generic
  CC      = gcc
  CFLAGS  = -m64 -march=x86-64 -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv -Wno-unused-label
  OPTIONS = USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with transparent proxy support using: IP_TRANSPARENT IP_FREEBIND
Encrypted password support via crypt(3): yes
Built with PCRE version : 8.32 2012-11-30
Running on PCRE version : 8.32 2012-11-30
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built without compression support (neither USE_ZLIB nor USE_SLZ are set).
Compression algorithms supported : identity("identity")
Built with network namespace support.

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available filters :
        [SPOE] spoe
        [COMP] compression
        [TRACE] trace

00000017:fe_frontend_pool_proxy_172_17_232_232_3128.accept(0005)=000d from [172.17.232.233:54117] ALPN=<none>
00000018:fe_frontend_pool_proxy_172_17_232_232_3128.accept(0005)=0027 from [172.17.232.233:54118] ALPN=<none>
00000017:bk_pool_proxy_172_17_232_232_3128.clicls[adfd:adfd]
00000017:bk_pool_proxy_172_17_232_232_3128.closed[adfd:adfd]
00000018:bk_pool_proxy_172_17_232_232_3128.clicls[adfd:adfd]
00000018:bk_pool_proxy_172_17_232_232_3128.closed[adfd:adfd]
00000019:fe_frontend_pool_proxy_172_17_232_232_3128.accept(0005)=000d from [172.17.232.233:54119] ALPN=<none>
0000001a:fe_frontend_pool_proxy_172_17_232_232_3128.accept(0005)=0027 from [172.17.232.233:54120] ALPN=<none>


And the question remains: why is it not working from a client in the same IP class, 172.17.232.x?

thanks

--Marius


==============================================================

On Thursday, March 22, 2018, 1:07:09 PM GMT+2, Mathias Weiersmüller <matti@weiersmueller.com> wrote:

Hi Marius,

your NIC is probably doing the TCP checksum calculation (called "TCP offloading"). The TCP/IP stack therefore sends all outbound TCP packets with the same dummy checksum (in your case: 0x2a21) to the NIC driver. This saves some CPU cycles.

Check your TCP offloading settings using:
/sbin/ethtool -k eth0

Disable TCP Offloading using:
sudo /sbin/ethtool -K eth0 tx off rx off

In other words: you have no problem; it's just tcpdump which thinks there is a TCP checksum problem. If you want to work around this, use the following tcpdump option:
-K
      --dont-verify-checksums
              Don't attempt to verify IP, TCP, or UDP checksums.  This is useful for interfaces that perform some or all
              of those checksum calculation in hardware; otherwise, all outgoing TCP checksums will be flagged as bad.

Cheers

Mathias

==============================================================

Von: matei marius <mat.marius@yahoo.com>
Gesendet: Donnerstag, 22. März 2018 11:50
An: HAproxy Mailing Lists <haproxy@formilux.org>
Betreff: transparent mode -> chksum incorrect


Hello
I'm trying to configure haproxy in transparent mode using the configuration below.

The backend servers have the haproxy IP (172.17.232.232) as their default gateway.

frontend fe_frontend_pool_proxy_3128
        timeout client 30m
        mode tcp
        bind 172.17.232.232:3128 transparent
        default_backend bk_pool_proxy_3128

backend bk_pool_proxy_3128
        timeout server 30m
        timeout connect 5s
        mode tcp
        balance leastconn
        default-server inter 5s fall 3 rise 2 on-marked-down shutdown-sessions
        source 0.0.0.0 usesrc clientip
        server sibipd-wcg1 172.17.232.229:3128 check port 3128 inter 3s rise 3 fall 3
        server romapd-wcg2 172.17.32.80:3128 check port 3128 backup inter 3s rise 3 fall 3 weight 10 source 0.0.0.0
        option redispatch

I have these iptables rules on the HAProxy server
iptables -t mangle -N DIVERT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 111
iptables -t mangle -A DIVERT -j ACCEPT
ip rule add fwmark 111 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100
    

This setup is working perfectly from any IP class other than 172.17.232.x.

When I try to access the service from the same IP class as haproxy, I see packets with an incorrect checksum.

tcpdump -i eth0 -n  host 172.17.232.229 and host 172.17.232.233 -vv
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes


12:37:21.741935 IP (tos 0x0, ttl 64, id 63601, offset 0, flags [DF], proto TCP (6), length 60)
    172.17.232.233.34012 > 172.17.232.229.3128: Flags [S], cksum 0x2a21 (incorrect -> 0xf5a2), seq 111508051, win 29200, options [mss 1460,sackOK,TS val 573276706 ecr 0,nop,wscale 7], length 0
12:37:21.743005 IP (tos 0x0, ttl 64, id 53770, offset 0, flags [DF], proto TCP (6), length 60)
    172.17.232.233.34014 > 172.17.232.229.3128: Flags [S], cksum 0x2a21 (incorrect -> 0xdbe0), seq 1250971688, win 29200, options [mss 1460,sackOK,TS val 573276706 ecr 0,nop,wscale 7], length 0

What am I doing wrong?    
    
Thanks
Marius

Re: [PHP-DEV] [RFC] [DISCUSSION] Improve null-coalescing operator (??) adding empty check (??:)

This is a RFC karma request for my wiki account.

I want to create a RFC with my proposal: Improve null-coalescing
operator (??) adding empty check (??:)

First list message is: http://news.php.net/php.internals/101606

The main idea is to simplify the "empty" check on non-existing keys or
object attributes. Same as "?:", but also checking for undefined.

Current check:

$value = empty($user->thisOptionalAttributeCanBeEmptyOrNotExists) ?
'without value' : $user->thisOptionalAttributeCanBeEmptyOrNotExists;

New feature:

$value = $user->thisOptionalAttributeCanBeEmptyOrNotExists ??: 'without
value';

I think that could be very useful for inline "exists" + "not empty"
checks, with clearer code.
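For comparison, a rough sketch of what today's operators already allow (my
own example, not part of the proposal): combining ?? and ?: gives the
"missing or empty" behaviour, at the cost of the extra parentheses and
"?? null":

<?php
$user = new stdClass();

// ?? suppresses the undefined-property/key notice, ?: then replaces any
// "empty" value (null, '', 0, [], false) with the fallback.
$value = ($user->thisOptionalAttributeCanBeEmptyOrNotExists ?? null) ?: 'without value';
echo $value; // "without value"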

Is it possible?

Thanks,
Lito.

On 17/01/18 19:47, Lito wrote:
> On 17/01/18 19:43, Andrey Andreev wrote:
>> Hi,
>>
>>
>> On Wed, Jan 17, 2018 at 8:28 PM, Lito <info@eordes.com> wrote:
>>> No, $foo ?: 'default' is only equivalent to (isset($foo) && $foo)
>>> ? $foo : 'default' if $foo exists.
>>>
>>> Also PHP has added ?? as null-coalescing operator that works with
>>> undefined
>>> variables/attributes/keys, my proposal is an improvement over this one.
>>>
>>> I don't want to endorse usage of undefined variables; this can be used
>>> in a large set of situations, like object attributes, array keys, etc...
>>>
>>> Anyway thanks for your feedback.
>>> Lito.
>>>
>> There is a shorter version:
>>
>>      empty($foo) ? 'default' : $foo;
>>
>> And I think that's quite convenient for the few use cases it has
>> (refer to Nikita's reply).
>>
>> Cheers,
>> Andrey.
>>
> Yes, I think that:
>
> $foo = $foo ??: 'default';
>
> is clearer and shorter than:
>
> $foo = empty($foo) ? 'default' : $foo;
>
> just as ?? does.
>
> Regards,
> Lito.
>
>


--
PHP Internals - PHP Runtime Development Mailing List
To unsubscribe, visit: http://www.php.net/unsub.php

Re: [PHP-DEV] Re:[PHP-DEV] Weird destructor call order on stream wrappers

Hello,

Am 22.03.2018 um 06:59 schrieb CHU Zhaowei:
> There is a related bug report: https://bugs.php.net/bug.php?id=75931

Thank you for pointing out this bug report. It is about stream filters
(extending the php_user_filter class) and not about stream wrappers,
which seem to have similar issues.

> It also points out that the constructor method has been ignored.

That does not apply to stream wrappers; the constructor is called just fine.

Greets
Dennis

--
PHP Internals - PHP Runtime Development Mailing List
To unsubscribe, visit: http://www.php.net/unsub.php

Re: h2 sending RSTs in response to RSTs

Hi,

On Thu, Jan 25, 2018 at 04:35:58AM +0800, klzgrad wrote:
> Hi,
>
> I patched Chromium to close the stream immediately after seeing END_STREAM.
>
> In testing, Chromium sends an RST (CANCEL) for this, but HAProxy
> replies with an RST (STREAM_CLOSED). This is a MUST NOT (though it is only
> a nuisance for me, as Chromium will print warnings for it):
>
> > To avoid looping, an endpoint MUST NOT send a RST_STREAM in response to a RST_STREAM frame.
>
> I put some logging points. This branch is being triggered:
>
> if (h2s->flags & H2_SF_RST_RCVD) {
>
> During this, h2s is h2_closed_stream, and the "closed" stream was
> previously deleted from h2_detach.

I'm sorry for the delay, but this was not lost. I've now addressed it.
Thanks for the detailed analysis, it helped me spot the cause. It was
indeed the fact that we report an error for a frame received on a
closed stream, but we must avoid responding in this case since the
frame in question was itself a reset.
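To illustrate the rule in isolation (a made-up, stand-alone helper, not the
actual patch; the type and function names below are invented):

#include <stdbool.h>

/* RFC 7540, 5.4.2: an endpoint MUST NOT send RST_STREAM in response to a
 * RST_STREAM frame, to avoid loops. So when a frame arrives on a closed
 * stream, only answer with RST_STREAM (STREAM_CLOSED) if that frame was
 * not itself a reset. */
enum frame_type { FT_DATA, FT_HEADERS, FT_RST_STREAM };

static bool may_send_rst_for_closed_stream(enum frame_type received)
{
    return received != FT_RST_STREAM;
}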

Cheers,
Willy

Re: AW: transparent mode -> chksum incorrect

On Thu, Mar 22, 2018 at 01:15:26PM +0000, matei marius wrote:
> haproxy -vv
> HA-Proxy version 1.8.4-1deb90d 2018/02/08
> Copyright 2000-2018 Willy Tarreau <willy@haproxy.org>
>
> Build options :
>   TARGET  = linux26
>   CPU     = generic
>   CC      = gcc
>   CFLAGS  = -m64 -march=x86-64 -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv -Wno-unused-label
>   OPTIONS = USE_PCRE=1
>
> Default settings :
>   maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200
>
> Built with transparent proxy support using: IP_TRANSPARENT IP_FREEBIND
> Encrypted password support via crypt(3): yes
> Built with PCRE version : 8.32 2012-11-30
> Running on PCRE version : 8.32 2012-11-30
> PCRE library supports JIT : no (USE_PCRE_JIT not set)
> Built without compression support (neither USE_ZLIB nor USE_SLZ are set).
> Compression algorithms supported : identity("identity")
> Built with network namespace support.
>
> Available polling systems :
>       epoll : pref=300,  test result OK
>        poll : pref=200,  test result OK
>      select : pref=150,  test result OK
> Total: 3 (3 usable), will use epoll.
>
> Available filters :
>         [SPOE] spoe
>         [COMP] compression
>         [TRACE] trace
>
> 00000017:fe_frontend_pool_proxy_172_17_232_232_3128.accept(0005)=000d from [172.17.232.233:54117] ALPN=<none>
> 00000018:fe_frontend_pool_proxy_172_17_232_232_3128.accept(0005)=0027 from [172.17.232.233:54118] ALPN=<none>
> 00000017:bk_pool_proxy_172_17_232_232_3128.clicls[adfd:adfd]
> 00000017:bk_pool_proxy_172_17_232_232_3128.closed[adfd:adfd]
> 00000018:bk_pool_proxy_172_17_232_232_3128.clicls[adfd:adfd]
> 00000018:bk_pool_proxy_172_17_232_232_3128.closed[adfd:adfd]
> 00000019:fe_frontend_pool_proxy_172_17_232_232_3128.accept(0005)=000d from [172.17.232.233:54119] ALPN=<none>
> 0000001a:fe_frontend_pool_proxy_172_17_232_232_3128.accept(0005)=0027 from [172.17.232.233:54120] ALPN=<none>

As Mathias said, the problem is unrelated to haproxy; it's the way
the network stack works on modern systems: checksums are offloaded to
the hardware, so the buffers where tcpdump finds the packets have no
valid checksum yet (usually 0, but it will depend on the OS) and the
checks indicate they are invalid.

> And the question remains: why is it not working from a client in the same IP class, 172.17.232.x?

This is normal: your server wants to respond directly to the client and
fails. This is a well-known problem in transparent proxy environments
as well as destination-nat ones (eg: LVS). You must always ensure that
the server will route the return traffic to the client through the load
balancer. If the client comes from the same network as the server, the
server believes it's on its local net and will route directly without
passing back via the load balancer. If your client on this network has
a fixed address, you can add a host route on your servers to join the
client via the load balancer. You will also likely have to disable the
emission of ICMP redirects on the LB (as it will receive a packet for
a destination belonging to the same LAN it received it from).
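For example (untested; this assumes the client is the 172.17.232.233 host
seen in your capture, the load balancer is 172.17.232.232 and the interface
is eth0 as in your original mail, so adjust to your environment):

# on each backend server: force return traffic for that client via the LB
ip route add 172.17.232.233/32 via 172.17.232.232 dev eth0

# on the load balancer: don't advertise the LAN shortcut back to the servers
sysctl -w net.ipv4.conf.all.send_redirects=0
sysctl -w net.ipv4.conf.eth0.send_redirects=0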

Usually people avoid transparent proxying for all these painful reasons,
and it's only enabled for traffic coming from the internet, never for
local systems.

Hoping this helps,
Willy

Re: [PHP] PHP 7.1.16RC1 ready for testing

On Thu, Mar 22, 2018 at 01:35:13PM +0100, Christoph M. Becker wrote:
> On 22.03.2018 at 05:39, The Doctor wrote:
>
> > On Thu, Mar 15, 2018 at 10:07:11PM +0100, Joe Watkins wrote:
> >> Afternoon everyone,
> >>
> >> PHP 7.1.16RC1 is available for testing at https://downloads.php.net/~ab
> >>
> >> Please report any bugs you find.
> >
> > Major bug and this will affect all upcoming versions of PHP!
> >
> > Oniguruma 6.8.1 is now incompatible.
> >
> > One of the structs is giving major grief.
>
> Already fixed with
> http://git.php.net/?p=php-src.git;a=commit;h=4072b2787074ee8e247a6639585b49e10c5a55fe.
>

Applied the patch
But still getting the following:

/usr/local/include/oniguruma.h:674:1: warning: typedef requires a name
[-Wmissing-declarations]
typedef struct re_pattern_buffer OnigRegexType;
^~~~~~~
/usr/local/include/oniguruma.h:674:34: warning: type specifier missing, defaults
to 'int' [-Wimplicit-int]
typedef struct re_pattern_buffer OnigRegexType;
^
/usr/local/include/oniguruma.h:675:9: error: unknown type name 'OnigRegexType'
typedef OnigRegexType* OnigRegex;
^
/usr/local/include/oniguruma.h:678:11: error: unknown type name 'OnigRegexType'
typedef OnigRegexType regex_t;
> --
> Christoph M. Becker
>
> --
> PHP General Mailing List (http://www.php.net/)
> To unsubscribe, visit: http://www.php.net/unsub.php
>

--
Member - Liberal International This is doctor@@nl2k.ab.ca Ici doctor@@nl2k.ab.ca
Yahweh, Queen & country!Never Satan President Republic!Beware AntiChrist rising!
https://www.empire.kred/ROOTNK?t=94a1f39b Look at Psalms 14 and 53 on Atheism
Always seek out the seed of triumph in every adversity. -Og Mandino

--
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php

[PHP-DEV] Re: [PHP] PHP 7.1.16RC1 ready for testing

On Thu, Mar 22, 2018 at 01:35:13PM +0100, Christoph M. Becker wrote:
> On 22.03.2018 at 05:39, The Doctor wrote:
>
> > On Thu, Mar 15, 2018 at 10:07:11PM +0100, Joe Watkins wrote:
> >> Afternoon everyone,
> >>
> >> PHP 7.1.16RC1 is available for testing at https://downloads.php.net/~ab
> >>
> >> Please report any bugs you find.
> >
> > Major bug and this will affect all upcoming versions of PHP!
> >
> > Oniguruma 6.8.1 is now incompatible.
> >
> > One of the structs is giving major grief.
>
> Already fixed with
> http://git.php.net/?p=php-src.git;a=commit;h=4072b2787074ee8e247a6639585b49e10c5a55fe.
>

Applied the patch
But still getting the following:

/usr/local/include/oniguruma.h:674:1: warning: typedef requires a name
[-Wmissing-declarations]
typedef struct re_pattern_buffer OnigRegexType;
^~~~~~~
/usr/local/include/oniguruma.h:674:34: warning: type specifier missing, defaults
to 'int' [-Wimplicit-int]
typedef struct re_pattern_buffer OnigRegexType;
^
/usr/local/include/oniguruma.h:675:9: error: unknown type name 'OnigRegexType'
typedef OnigRegexType* OnigRegex;
^
/usr/local/include/oniguruma.h:678:11: error: unknown type name 'OnigRegexType'
typedef OnigRegexType regex_t;
> --
> Christoph M. Becker
>
> --
> PHP General Mailing List (http://www.php.net/)
> To unsubscribe, visit: http://www.php.net/unsub.php
>

--
Member - Liberal International This is doctor@@nl2k.ab.ca Ici doctor@@nl2k.ab.ca
Yahweh, Queen & country!Never Satan President Republic!Beware AntiChrist rising!
https://www.empire.kred/ROOTNK?t=94a1f39b Look at Psalms 14 and 53 on Atheism
Always seek out the seed of triumph in every adversity. -Og Mandino

--
PHP Internals - PHP Runtime Development Mailing List
To unsubscribe, visit: http://www.php.net/unsub.php

[PHP-DEV] Re: [PHP] PHP 7.1.16RC1 ready for testing

On 22.03.2018 at 17:56, The Doctor wrote:

> On Thu, Mar 22, 2018 at 01:35:13PM +0100, Christoph M. Becker wrote:
>
>> Already fixed with
>> http://git.php.net/?p=php-src.git;a=commit;h=4072b2787074ee8e247a6639585b49e10c5a55fe.
>
> Applied the patch
> But still getting the following:
>
> /usr/local/include/oniguruma.h:674:1: warning: typedef requires a name
> [-Wmissing-declarations]
> typedef struct re_pattern_buffer OnigRegexType;
> ^~~~~~~
> /usr/local/include/oniguruma.h:674:34: warning: type specifier missing, defaults
> to 'int' [-Wimplicit-int]
> typedef struct re_pattern_buffer OnigRegexType;
> ^
> /usr/local/include/oniguruma.h:675:9: error: unknown type name 'OnigRegexType'
> typedef OnigRegexType* OnigRegex;
> ^
> /usr/local/include/oniguruma.h:678:11: error: unknown type name 'OnigRegexType'
> typedef OnigRegexType regex_t;

That does not seem to be related to PHP, since the errors occur in the
external library. FWIW, Oniguruma 6.8.1 appears to build fine in our
master. Anyhow, if you find bugs in PHP, https://bugs.php.net/ is the
proper place to report them. :)

--
Christoph M. Becker

--
PHP Internals - PHP Runtime Development Mailing List
To unsubscribe, visit: http://www.php.net/unsub.php

Re: actconn issue

So I tried debugging this a little further; in listener.c I added some
debug output around the sections where actconn is increased/decreased.
With light test traffic running, I can see the following (it took a few
hours):

(...)
Increased actconn to 1 (in line 611)
Decreased actconn to 0 (in line 682)
Decreased actconn to -1 (in line 627)

Out of many thousands of occurrences of modifying actconn, there's a single
instance of actconn getting decreased in line 627, so apparently the
accept returned an unexpected value. Unfortunately, I missed
getting the output of "show errors" before terminating the process.
Hints on how to debug this better would be appreciated.

Regards,
J.

[PHP-DEV] Re: [PHP] PHP 7.1.16RC1 ready for testing

On Thu, Mar 22, 2018 at 08:11:10PM +0100, Christoph M. Becker wrote:
> On 22.03.2018 at 17:56, The Doctor wrote:
>
> > On Thu, Mar 22, 2018 at 01:35:13PM +0100, Christoph M. Becker wrote:
> >
> >> Already fixed with
> >> http://git.php.net/?p=php-src.git;a=commit;h=4072b2787074ee8e247a6639585b49e10c5a55fe.
> >
> > Applied the patch
> > But still getting the following:
> >
> > /usr/local/include/oniguruma.h:674:1: warning: typedef requires a name
> > [-Wmissing-declarations]
> > typedef struct re_pattern_buffer OnigRegexType;
> > ^~~~~~~
> > /usr/local/include/oniguruma.h:674:34: warning: type specifier missing, defaults
> > to 'int' [-Wimplicit-int]
> > typedef struct re_pattern_buffer OnigRegexType;
> > ^
> > /usr/local/include/oniguruma.h:675:9: error: unknown type name 'OnigRegexType'
> > typedef OnigRegexType* OnigRegex;
> > ^
> > /usr/local/include/oniguruma.h:678:11: error: unknown type name 'OnigRegexType'
> > typedef OnigRegexType regex_t;
>
> That does not seem to be related to PHP, since the errors occur in the
> external library. FWIW, Oniguruma 6.8.1 appears to build fine in our
> master. Anyhow, if you find bugs in PHP, https://bugs.php.net/ is the
> proper place to report them. :)
>

Will report before midnight.

But heads up, some others might complain.

> --
> Christoph M. Becker

--
Member - Liberal International This is doctor@@nl2k.ab.ca Ici doctor@@nl2k.ab.ca
Yahweh, Queen & country!Never Satan President Republic!Beware AntiChrist rising!
https://www.empire.kred/ROOTNK?t=94a1f39b Look at Psalms 14 and 53 on Atheism
Always seek out the seed of triumph in every adversity. -Og Mandino

--
PHP Internals - PHP Runtime Development Mailing List
To unsubscribe, visit: http://www.php.net/unsub.php

Only compressed version of file on server , and supporting clients that don't send Accept-Encoding.

Hi,
We have only gzipped files stored on nginx and need to serve clients that:
A) Support gzip transfer encoding (> 99% of the clients). They send an
Accept-Encoding: gzip header.
B) Don't support transfer encoding (< 1% of the clients). They don't send an
Accept-Encoding header.

There is ample CPU on the nginx servers to support clients of type B), but
I am unable to figure out a config or a reasonable script
to help us serve these clients.

Clients of type A) are served with the following config.
--- Working config that appends .gz in the try_files ----
location /compressed_files/ {
add_header Content-Encoding "gzip";
expires 48h;
add_header Cache-Control private;
try_files $uri.gz @lua_script_for_missing_file;
}


----- Not working config with gunzip on; likely because gunzip filter
runs before add_header?

location /compressed_files/ {
add_header Content-Encoding "gzip";

expires 48h;
add_header Cache-Control private;
# gunzip on fails to uncompress, likely because it does not notice the
# add_header directive.
gunzip on;
gzip_proxied any;
try_files $uri.gz @lua_script_for_missing_file;
}


I would appreciate any pointers on how to do this. I may be missing some
obvious configuration for such a case.
We did discuss keeping both unzipped and zipped versions on the server, but
unfortunately that is unlikely to happen.

Thanks,
Hemant
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Only compressed version of file on server , and supporting clients that don't send Accept-Encoding.

Hello!

On Thu, Mar 22, 2018 at 05:00:20PM -0700, Hemant Bist wrote:

> Hi,
> We have only gzipped files stored on nginx and need to serve client that :
> A) Support gzip transfer encoding (> 99% of the clients). They send
> Accept-Encoding: gzip header...
> B) < 1% of the clients that don't support transfer encoding. The don't send
> Accept-Encoding header.
>
> There is ample CPU in the nginx servers to support clients of type B). But
> I am unable to figure out a config/reasonable script
> to help us serve these clients.
>
> Clients of type A) are served with the following config.
> --- Working config that appends .gz in the try_files ----
> location /compressed_files/ {
> add_header Content-Encoding "gzip";
> expires 48h;
> add_header Cache-Control private;
> try_files $uri.gz @lua_script_for_missing_file;
> }
>
>
> ----- Not working config with gunzip on; likely because gunzip filter
> runs before add_header?
>
> location /compressed_files/ {
> add_header Content-Encoding "gzip";
>
> expires 48h;
> add_header Cache-Control private;
> *# gunzip on fails to uncompress likely because it does not notice the
> add_header directive.*
> * gunzip on;*
> * gzip_proxied any;*
> try_files $uri.gz @lua_script_for_missing_file;
> }
>
>
> I would appreciate any pointers on how to do this. I may be missing some
> obvious configuration for such case.
> We did discuss keeping both unzipped and zipped version on the server, but
> unfortunately that is unlikely to happen.

Try this instead:

location /compressed_files/ {
gzip_static always;
gunzip on;
}

See documentation here for additional details:

http://nginx.org/r/gzip_static
http://nginx.org/r/gunzip

Note that you won't be able to combine this with "try_files
$uri.gz ...", as this will change the URI as seen by gzip_static and
will break it. If you want to fall back to a different location
when there is no file, use "error_page 404 ..." instead.
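Putting those pieces together, something like this (untested; the
location prefix and the @lua_script_for_missing_file name are taken from
your config above, the rest follows the documentation links):

location /compressed_files/ {
    gzip_static always;   # serve the stored .gz even without Accept-Encoding: gzip
    gunzip on;            # decompress on the fly for clients that can't handle gzip
    expires 48h;
    add_header Cache-Control private;

    # fall back when neither $uri nor $uri.gz exists
    error_page 404 = @lua_script_for_missing_file;
}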

--
Maxim Dounin
http://mdounin.ru/
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Segmentation faults (1.8.4-de425f6 2018/02/26)

Hi,

we had two crashes yesterday within about 2 hours.

HA-Proxy version 1.8.4-de425f6 2018/02/26
Copyright 2000-2018 Willy Tarreau <willy@haproxy.org>

Build options :
TARGET = linux2628
CPU = generic
CC = gcc
CFLAGS = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv -Wno-null-dereference -Wno-unused-label
OPTIONS = USE_LINUX_SPLICE=1 USE_LIBCRYPT=1 USE_ZLIB=1 USE_OPENSSL=1 USE_PCRE=1

Default settings :
maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : OpenSSL 1.1.0f 25 May 2017
Running on OpenSSL version : OpenSSL 1.1.0f 25 May 2017
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Encrypted password support via crypt(3): yes
Built with multi-threading support.
Built with PCRE version : 8.39 2016-06-14
Running on PCRE version : 8.39 2016-06-14
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with zlib version : 1.2.8
Running on zlib version : 1.2.8
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with network namespace support.

Available polling systems :
epoll : pref=300, test result OK
poll : pref=200, test result OK
select : pref=150, test result OK
Total: 3 (3 usable), will use epoll.

Available filters :
[SPOE] spoe
[COMP] compression
[TRACE] trace



root@66b9ab4204d8:/code# gdb /usr/local/sbin/haproxy core
GNU gdb (Debian 7.12-6) 7.12.0.20161007-git
Copyright (C) 2016 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later http://gnu.org/licenses/gpl.html
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
http://www.gnu.org/software/gdb/bugs/.
Find the GDB manual and other documentation resources online at:
http://www.gnu.org/software/gdb/documentation/.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from /usr/local/sbin/haproxy...done.
[New LWP 10]

warning: .dynamic section for "/lib64/ld-linux-x86-64.so.2" is not at the expected address (wrong library or version mismatch?)
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `/usr/local/sbin/haproxy -f /etc/haproxy.cfg'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0 __eb_delete (node=0x55dae9d8db30, node@entry=0x55dae8bdd230) at ebtree/ebtree.h:720
720 ebtree/ebtree.h: No such file or directory.
(gdb) bt
#0 __eb_delete (node=0x55dae9d8db30, node@entry=0x55dae8bdd230) at ebtree/ebtree.h:720
#1 eb_delete (node=node@entry=0x55dae9d8db30) at ebtree/ebtree.c:25
#2 0x000055dae7bc36f5 in eb32_delete (eb32=0x55dae9d8db30) at ebtree/eb32tree.h:106
#3 __task_unlink_wq (t=0x55dae9d8dad0) at include/proto/task.h:145
#4 task_unlink_wq (t=<optimized out>) at include/proto/task.h:153
#5 task_delete (t=<optimized out>) at include/proto/task.h:192
#6 process_stream (t=t@entry=0x55dae9d8dad0) at src/stream.c:2514
#7 0x000055dae7c3f792 in process_runnable_tasks () at src/task.c:229
#8 0x000055dae7bf2674 in run_poll_loop () at src/haproxy.c:2399
#9 run_thread_poll_loop (data=<optimized out>) at src/haproxy.c:2461
#10 0x000055dae7b6cfea in main (argc=<optimized out>, argv=0x7ffcff36a218) at src/haproxy.c:3050




global
log /dev/log local0 warning
maxconn 50000
tune.ssl.default-dh-param 2048
ssl-default-bind-ciphers ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS
ssl-default-bind-options no-sslv3 no-tls-tickets
ssl-default-server-ciphers ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS
ssl-default-server-options no-sslv3 no-tls-tickets

defaults
log global
mode http
timeout connect 3s
timeout client 30s
timeout server 120s
timeout tunnel 3600s
timeout http-keep-alive 1s
timeout http-request 15s
option http-server-close
option httplog
option forwardfor
errorfile 503 /config/503.html
errorfile 408 /dev/null

userlist httpauth
user foo bar

resolvers docker
nameserver docker 127.0.0.11:53
hold valid 2s

frontend http
bind 0.0.0.0:80
reqadd X-Forwarded-Proto:\ http

acl is_assets hdr_dom(host) -i ${ASSET_HOST}
use_backend varnish-backend if is_assets
default_backend phoenix-backend

frontend https
bind 0.0.0.0:443 ssl crt "/letsencrypt/certificates/${CERTIFICATE_NAME}.pem" alpn h2,http/1.1 no-sslv3
rspadd Strict-Transport-Security:\ max-age=31536000

# cowboy crashes when invalid headers are sent
# see https://github.com/ninenines/cowboy/issues/943
acl invalid_keepalive_header hdr(Connection) -i keep-alive\ Te
reqirep ^Connection:\ keep-alive\ Te Connection:\ keep-alive,\ Te if invalid_keepalive_header

acl invalid_keepalive_header_1 hdr(Connection) -i Te\ keep-alive
reqirep ^Connection:\ Te\ keep-alive Connection:\ keep-alive,\ Te if invalid_keepalive_header_1

reqadd X-Forwarded-Proto:\ https

acl is_assets hdr_dom(host) -i ${ASSET_HOST}
acl is_metrics hdr_dom(host) -i m.foo.com
acl is_graphs hdr_dom(host) -i g.foo.com
acl is_ci hdr_dom(host) -i c.foo.com

use_backend varnish-backend if is_assets
use_backend prometheus-backend if is_metrics
use_backend grafana-backend if is_graphs
use_backend ci-backend if is_ci
default_backend phoenix-backend

backend varnish-backend
server varnish varnish:80 resolvers docker init-addr libc,last,none check port 80 inter 200

backend phoenix-backend
option httpchk GET /status
server phoenix phoenix:4000 resolvers docker init-addr libc,last,none check inter 200

backend prometheus-backend
acl auth_ok http_auth(httpauth)
http-request auth realm httpauth unless auth_ok
server prometheus prometheus:9090 resolvers docker init-addr last,none check port 9090

backend grafana-backend
server grafana grafana:3000 resolvers docker init-addr last,none check port 3000

backend ci-backend
server drone-server drone-server:8000 resolvers docker init-addr last,none check port 8000

Re: Segmentation faults (1.8.4-de425f6 2018/02/26)

We've been experiencing crashes too, with all 1.8 versions - currently
using 1.8.4 from PPA.
We noticed that disabling h2 prevents crashes.
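For reference, a sketch of what that can look like with the bind line from
the config quoted below: drop h2 from the ALPN list so only HTTP/1.1 is
negotiated.

frontend https
    bind 0.0.0.0:443 ssl crt "/letsencrypt/certificates/${CERTIFICATE_NAME}.pem" alpn http/1.1 no-sslv3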



Kind regards


Peter Lindegaard Hansen

Software developer / Partner

Phone: +45 96 500 300 | Direct: 69 14 97 04 | Email: plh@tigermedia.dk
Tiger Media A/S | Gl. Gugvej 17C | 9000 Aalborg | Web: www.tigermedia.dk

For support questions, contact us at support@tigermedia.dk or by phone at
96 500 300 and your inquiry will be answered by the first available employee.

2018-03-23 10:09 GMT+01:00 Holger Amann <holger@fehu.org>:

> Hi,
>
> we had two crashes yesterday within about 2 hours.
>
> HA-Proxy version 1.8.4-de425f6 2018/02/26
> Copyright 2000-2018 Willy Tarreau <willy@haproxy.org>
>
> Build options :
> TARGET = linux2628
> CPU = generic
> CC = gcc
> CFLAGS = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement
> -fwrapv -Wno-null-dereference -Wno-unused-label
> OPTIONS = USE_LINUX_SPLICE=1 USE_LIBCRYPT=1 USE_ZLIB=1 USE_OPENSSL=1
> USE_PCRE=1
>
> Default settings :
> maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200
>
> Built with OpenSSL version : OpenSSL 1.1.0f 25 May 2017
> Running on OpenSSL version : OpenSSL 1.1.0f 25 May 2017
> OpenSSL library supports TLS extensions : yes
> OpenSSL library supports SNI : yes
> OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2
> Built with transparent proxy support using: IP_TRANSPARENT
> IPV6_TRANSPARENT IP_FREEBIND
> Encrypted password support via crypt(3): yes
> Built with multi-threading support.
> Built with PCRE version : 8.39 2016-06-14
> Running on PCRE version : 8.39 2016-06-14
> PCRE library supports JIT : no (USE_PCRE_JIT not set)
> Built with zlib version : 1.2.8
> Running on zlib version : 1.2.8
> Compression algorithms supported : identity("identity"),
> deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
> Built with network namespace support.
>
> Available polling systems :
> epoll : pref=300, test result OK
> poll : pref=200, test result OK
> select : pref=150, test result OK
> Total: 3 (3 usable), will use epoll.
>
> Available filters :
> [SPOE] spoe
> [COMP] compression
> [TRACE] trace
>
>
>
> root@66b9ab4204d8:/code# gdb /usr/local/sbin/haproxy core
> GNU gdb (Debian 7.12-6) 7.12.0.20161007-git
> Copyright (C) 2016 Free Software Foundation, Inc.
> License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.
> html>
> This is free software: you are free to change and redistribute it.
> There is NO WARRANTY, to the extent permitted by law. Type "show copying"
> and "show warranty" for details.
> This GDB was configured as "x86_64-linux-gnu".
> Type "show configuration" for configuration details.
> For bug reporting instructions, please see:
> http://www.gnu.org/software/gdb/bugs/.
> Find the GDB manual and other documentation resources online at:
> http://www.gnu.org/software/gdb/documentation/.
> For help, type "help".
> Type "apropos word" to search for commands related to "word"...
> Reading symbols from /usr/local/sbin/haproxy...done.
> [New LWP 10]
>
> warning: .dynamic section for "/lib64/ld-linux-x86-64.so.2" is not at the
> expected address (wrong library or version mismatch?)
> [Thread debugging using libthread_db enabled]
> Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
> Core was generated by `/usr/local/sbin/haproxy -f /etc/haproxy.cfg'.
> Program terminated with signal SIGSEGV, Segmentation fault.
> #0 __eb_delete (node=0x55dae9d8db30, node@entry=0x55dae8bdd230) at
> ebtree/ebtree.h:720
> 720 ebtree/ebtree.h: No such file or directory.
> (gdb) bt
> #0 __eb_delete (node=0x55dae9d8db30, node@entry=0x55dae8bdd230) at
> ebtree/ebtree.h:720
> #1 eb_delete (node=node@entry=0x55dae9d8db30) at ebtree/ebtree.c:25
> #2 0x000055dae7bc36f5 in eb32_delete (eb32=0x55dae9d8db30) at
> ebtree/eb32tree.h:106
> #3 __task_unlink_wq (t=0x55dae9d8dad0) at include/proto/task.h:145
> #4 task_unlink_wq (t=<optimized out>) at include/proto/task.h:153
> #5 task_delete (t=<optimized out>) at include/proto/task.h:192
> #6 process_stream (t=t@entry=0x55dae9d8dad0) at src/stream.c:2514
> #7 0x000055dae7c3f792 in process_runnable_tasks () at src/task.c:229
> #8 0x000055dae7bf2674 in run_poll_loop () at src/haproxy.c:2399
> #9 run_thread_poll_loop (data=<optimized out>) at src/haproxy.c:2461
> #10 0x000055dae7b6cfea in main (argc=<optimized out>, argv=0x7ffcff36a218)
> at src/haproxy.c:3050
>
>
>
>
> global
> log /dev/log local0 warning
> maxconn 50000
> tune.ssl.default-dh-param 2048
> ssl-default-bind-ciphers ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS
> ssl-default-bind-options no-sslv3 no-tls-tickets
> ssl-default-server-ciphers ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS
> ssl-default-server-options no-sslv3 no-tls-tickets
>
> defaults
> log global
> mode http
> timeout connect 3s
> timeout client 30s
> timeout server 120s
> timeout tunnel 3600s
> timeout http-keep-alive 1s
> timeout http-request 15s
> option http-server-close
> option httplog
> option forwardfor
> errorfile 503 /config/503.html
> errorfile 408 /dev/null
>
> userlist httpauth
> user foo bar
>
> resolvers docker
> nameserver docker 127.0.0.11:53
> hold valid 2s
>
> frontend http
> bind 0.0.0.0:80
> reqadd X-Forwarded-Proto:\ http
>
> acl is_assets hdr_dom(host) -i ${ASSET_HOST}
> use_backend varnish-backend if is_assets
> default_backend phoenix-backend
>
> frontend https
> bind 0.0.0.0:443 ssl crt "/letsencrypt/certificates/${CERTIFICATE_NAME}.pem" alpn h2,http/1.1 no-sslv3
> rspadd Strict-Transport-Security:\ max-age=31536000
>
> # cowboy crashes when invalid headers are sent
> # see https://github.com/ninenines/cowboy/issues/943
> acl invalid_keepalive_header hdr(Connection) -i keep-alive\ Te
> reqirep ^Connection:\ keep-alive\ Te Connection:\ keep-alive,\ Te if invalid_keepalive_header
>
> acl invalid_keepalive_header_1 hdr(Connection) -i Te\ keep-alive
> reqirep ^Connection:\ Te\ keep-alive Connection:\ keep-alive,\ Te if invalid_keepalive_header_1
>
> reqadd X-Forwarded-Proto:\ https
>
> acl is_assets hdr_dom(host) -i ${ASSET_HOST}
> acl is_metrics hdr_dom(host) -i m.foo.com
> acl is_graphs hdr_dom(host) -i g.foo.com
> acl is_ci hdr_dom(host) -i c.foo.com
>
> use_backend varnish-backend if is_assets
> use_backend prometheus-backend if is_metrics
> use_backend grafana-backend if is_graphs
> use_backend ci-backend if is_ci
> default_backend phoenix-backend
>
> backend varnish-backend
> server varnish varnish:80 resolvers docker init-addr libc,last,none check port 80 inter 200
>
> backend phoenix-backend
> option httpchk GET /status
> server phoenix phoenix:4000 resolvers docker init-addr libc,last,none check inter 200
>
> backend prometheus-backend
> acl auth_ok http_auth(httpauth)
> http-request auth realm httpauth unless auth_ok
> server prometheus prometheus:9090 resolvers docker init-addr last,none check port 9090
>
> backend grafana-backend
> server grafana grafana:3000 resolvers docker init-addr last,none check port 3000
>
> backend ci-backend
> server drone-server drone-server:8000 resolvers docker init-addr last,none check port 8000
>
>
>

proxy_cache_key case sensitivity question

The question is if these are cached as different files
http://myurl.html
http://MyUrl.html

I’m assuming that both would be different cache locations, since the MD5 would be different for each, but ideally these would be the same cached file to prevent dupes.

My question is about the proxy_cache_key: when that is generated, is it case sensitive? We ran a quick test and it seemed that changing the case in the URL created a new/different version of the page. If our test was accurate and this is how it works, is there a way to make the key used to generate the MD5 always use a lower-case string?

One possible solution is to install the module that changes strings to lower/upper and then wrap that around the string used for the key. But before I go down that path, I wanted to find out if I would be wasting my time.
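For reference, the kind of config I have in mind (this assumes the
lua-nginx-module / OpenResty is installed; the upstream name and cache zone
below are placeholders, not our real config):

location / {
    # compute a lowercased copy of the request URI for this request
    set_by_lua_block $uri_lowercase {
        return string.lower(ngx.var.uri)
    }

    proxy_pass http://upstream_app;
    proxy_cache my_cache;
    # key on the lowercased URI so /MyUrl.html and /myurl.html share one entry
    proxy_cache_key "$scheme$proxy_host$uri_lowercase$is_args$args";
}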


___________________________________________
Michael Friscia
Office of Communications
Yale School of Medicine
(203) 737-7932 - office
(203) 931-5381 - mobile
http://web.yale.edu/

_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

[PHP] Error with composer

I am using PDO + Composer and I have an error with some .php files but not with others, which I have not been able to solve.

Fatal error: Uncaught Error: Class 'PostgreSQL\PostgreSQLPHPInsert' not found in /var/www/postgresqlphpconnect/indexinsert.php:11 Stack trace: #0 {main} thrown in /var/www/postgresqlphpconnect/indexinsert.php on line 11

Next I show my .php:

<?php
require 'vendor/autoload.php';

use PostgreSQL\Connection as Connection;
use PostgreSQL\PostgreSQLPHPInsert as PostgreSQLPHPInsert;

try {
    // connect to the PostgreSQL database
    $pdo = Connection::get()->connect();
    //
    $insertDemo = new PostgreSQLPHPInsert($pdo);

    // insert a stock into the stocks table
    $id = $insertDemo->insertStock('MSFT', 'Microsoft Corporation');
    echo 'The stock has been inserted with the id ' . $id . '<br>';

    // insert a list of stocks into the stocks table
    $list = $insertDemo->insertStockList([
        ['symbol' => 'GOOG', 'company' => 'Google Inc.'],
        ['symbol' => 'YHOO', 'company' => 'Yahoo! Inc.'],
        ['symbol' => 'FB', 'company' => 'Facebook, Inc.'],
    ]);

    foreach ($list as $id) {
        echo 'The stock has been inserted with the id ' . $id . '<br>';
    }
} catch (\PDOException $e) {
    echo $e->getMessage();
}

The Connection class from the connection file (use PostgreSQL\Connection as Connection;) works without problems, but with use PostgreSQL\PostgreSQLPHPInsert as PostgreSQLPHPInsert; I get the error that I copied above.
This is my autoload.php:

<?php

// autoload.php @generated by Composer

require_once __DIR__ . '/composer/autoload_real.php';

return ComposerAutoloaderInit4a8bb4024109306e38c15d9bc0c30d94::getLoader();

This is the class where I do the insert:

/**
 * Return all rows in the stocks table
 * @return array
 */
public function all() {
    $stmt = $this->pdo->query('SELECT id, symbol, company '
        . 'FROM stocks '
        . 'ORDER BY symbol');
    $stocks = [];
    while ($row = $stmt->fetch(\PDO::FETCH_ASSOC)) {
        $stocks[] = [
            'id' => $row['id'],
            'symbol' => $row['symbol'],
            'company' => $row['company']
        ];
    }
    return $stocks;
}

I have other .php files with other classes that do not give me an error, and I do not see the difference with this one.
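One thing I still need to check (just a guess; the src/ path and file name
below are assumptions about my own layout): whether PostgreSQLPHPInsert is
actually registered with the autoloader. If the class lives in
src/PostgreSQLPHPInsert.php with a "namespace PostgreSQL;" declaration, a
PSR-4 mapping like this in composer.json lets vendor/autoload.php find it:

{
    "autoload": {
        "psr-4": {
            "PostgreSQL\\": "src/"
        }
    }
}

followed by running "composer dump-autoload" so the generated autoloader
picks up the mapping.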





--
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php