HAProxy running on Kubernetes outputs a lot of BADREQ logs
While setting up HAProxy on our Kubernetes cluster, I defined the log format as JSON and got a lot of BADREQ logs with status code 400, like the following.
{
  "conn": {
    "act": 7,
    "fe": 7,
    "be": 0,
    "srv": 0
  },
  "queue": {
    "backend": 0,
    "srv": 0
  },
  "time": {
    "tq": -1,
    "tw": -1,
    "tc": -1,
    "tr": -1,
    "tt": 0
  },
  "termination_state": "CR--",
  "retries": 0,
  "network": {
    "client_ip": "172.31.79.179",
    "client_port": 63740,
    "frontend_ip": "172.31.76.184",
    "frontend_port": 80
  },
  "ssl": {
    "version": "-",
    "ciphers": "-"
  },
  "request": {
    "method": "<BADREQ>",
    "uri": "-",
    "protocol": "<BADREQ>",
    "header": {
      "host": "-",
      "xforwardfor": "-",
      "referer": "-"
    }
  },
  "name": {
    "backend": "https-in",
    "frontend": "https-in",
    "server": "<NOSRV>"
  },
  "response": {
    "status_code": 400,
    "header": {
      "xrequestid": "-"
    }
  },
  "bytes": {
    "uploaded": 0,
    "read": 245
  }
}
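For context, this JSON shape comes from a custom log-format line rather than HAProxy's default httplog format. The following is only a rough sketch covering a handful of the fields above, using standard HAProxy log-format variables; it is not my actual configuration, and the quoting may need adjusting for your HAProxy version.

# rough sketch only -- assumes "mode http" and a log target are set in the defaults section
frontend https-in
  bind :80
  log-format '{"network":{"client_ip":"%ci","client_port":%cp},"request":{"method":"%HM","uri":"%HU","protocol":"%HV"},"termination_state":"%tsc","response":{"status_code":%ST},"bytes":{"uploaded":%U,"read":%B}}'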
After researching these error logs, I realized that they are caused by the Kubernetes liveness probe, which is defined like this.
livenessProbe:
  tcpSocket:
    port: 80
  initialDelaySeconds: 15
  periodSeconds: 5
I’m not sure, but I think any health check done over plain TCP rather than HTTP will produce the same logs: the probe just opens a TCP connection and closes it without sending a request, so HAProxy records it as <BADREQ> with termination state CR-- (the client closed the connection while HAProxy was still waiting for a request).
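In other words, a probe that sends a real HTTP request should not trigger these entries. A sketch of such a probe follows (not what I did; the path is a hypothetical example, and HAProxy would need to answer it, e.g. via a monitor-uri or a routed backend):

livenessProbe:
  httpGet:
    path: /healthz   # hypothetical path; HAProxy must be able to answer it
    port: 80
  initialDelaySeconds: 15
  periodSeconds: 5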
Finally, I solved this problem by adding the "dontlognull" option, which tells HAProxy not to log connections that did not transfer any data, like this.
defaults
  mode http
  option dontlognull
  :
  :
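If you don't want to hide null connections for every proxy, the same option can also be set on just the affected frontend instead of in the defaults section. A sketch, assuming the frontend name from the log above:

frontend https-in
  bind :80
  option dontlognull

One thing to keep in mind: the HAProxy documentation recommends against dontlognull on Internet-facing frontends, because genuinely empty connections such as port scans will no longer be logged either. For an internal frontend that is mostly hit by probes, that trade-off is usually acceptable.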