>>547
Hey, not sure if you use haproxy for that ratelimit, but you can solve the issue of too many requests breaking long threads by keying the ratelimit on URL path + IP instead of just IP. Of course that means an attacker could take a shotgun approach and hit many pages at once, so you should still keep a ratelimit on the IP or connection rate, just with a higher threshold. Example haproxy config snippet below:
[code]
##### Track clients by base32+src (base32 = hash of Host header + URL path, concatenated with src IP)
# len 20 fits the 4-byte base32 hash plus an IPv6 src (16 bytes); len 8 only covers IPv4
stick-table type binary len 20 size 100k expire 10s store http_req_rate(10s) # request rate over last 10 seconds
http-request track-sc0 base32+src
acl rate_abuse sc0_http_req_rate gt 50 # 50 requests/10s (5 req/s) PER PAGE
###### Deny if rate abuse
http-request deny deny_status 429 if rate_abuse
### Optional, some other ways to deny request instead of http-request deny
#http-request tarpit if rate_abuse
# keeps the connection busy but sends no data until "timeout tarpit" expires,
# then responds with 500 server error to make bots think they took your site down.
# very effective against very dumb robots, more so than http-request deny,
# but can force haproxy to keep an insane number of connections open while tarpitting,
# so make sure you also have a rule limiting concurrent connections per IP
#http-request silent-drop if rate_abuse
# similar to tarpit, but makes haproxy completely forget the connection without notifying the client.
# can handle even more traffic than tarpit, but may have issues with stateful firewalls
# not closing connections properly, so beware.
[/code]
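And here's a rough sketch of the looser per-IP limit I mentioned, plus the per-IP concurrent-connection cap you want if you tarpit. The table/frontend names and thresholds are just examples, tune them for your traffic:
[code]
##### Looser per-IP limit to pair with the per-page one above
backend st_per_ip
    # dummy backend used only to hold the per-IP stick-table
    stick-table type ip size 100k expire 30s store conn_cur,conn_rate(10s),http_req_rate(10s)

frontend fe_main
    bind :80
    tcp-request connection track-sc1 src table st_per_ip
    # cap concurrent connections per IP so tarpit can't pile them up
    tcp-request connection reject if { sc1_conn_cur gt 20 }
    # global per-IP request budget, much higher than the 5 req/s per-page cap
    acl ip_abuse sc1_http_req_rate gt 200
    http-request deny deny_status 429 if ip_abuse
[/code]
Tracking on sc1 keeps it independent of the sc0 per-page counter, so both limits apply at once.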