
Installing ECK in Kubernetes with YAML


Kubernetes is currently the most popular container orchestration technology, and more and more applications are being migrated to it. Built-in resource objects such as ReplicaSet, Deployment, and Service already cover the basic needs of stateless applications, including automatic scaling and load balancing. Stateful, distributed applications, however, usually come with their own modeling conventions, for example Prometheus, Etcd, ZooKeeper, and Elasticsearch. Deploying them often requires domain-specific knowledge, and scaling or upgrading them raises questions such as how to keep the service available. The Kubernetes Operator pattern emerged to simplify the deployment of such stateful, distributed applications.

A Kubernetes Operator is an application-specific controller. It extends the Kubernetes API through CRDs (Custom Resource Definitions) and can be used to create, configure, and manage a particular stateful application without working directly with low-level Kubernetes resources such as Pods, Deployments, and Services.

Elastic Cloud on Kubernetes (ECK) is one such Kubernetes Operator. It makes it easy to manage the components of the Elastic Stack, such as Elasticsearch, Kibana, APM, and Beats. For example, it only takes defining a single Elasticsearch custom resource for ECK to stand up a complete Elasticsearch cluster for us.

Install the Elastic custom resource definitions with kubectl create

[root@k8s-192-168-1-140 ~]# kubectl create -f https://download.elastic.co/downloads/eck/3.2.0/crds.yaml

customresourcedefinition.apiextensions.k8s.io/agents.agent.k8s.elastic.co created

customresourcedefinition.apiextensions.k8s.io/apmservers.apm.k8s.elastic.co created

customresourcedefinition.apiextensions.k8s.io/beats.beat.k8s.elastic.co created

customresourcedefinition.apiextensions.k8s.io/elasticmapsservers.maps.k8s.elastic.co created

customresourcedefinition.apiextensions.k8s.io/elasticsearchautoscalers.autoscaling.k8s.elastic.co created

customresourcedefinition.apiextensions.k8s.io/elasticsearches.elasticsearch.k8s.elastic.co created

customresourcedefinition.apiextensions.k8s.io/enterprisesearches.enterprisesearch.k8s.elastic.co created

customresourcedefinition.apiextensions.k8s.io/kibanas.kibana.k8s.elastic.co created

customresourcedefinition.apiextensions.k8s.io/logstashes.logstash.k8s.elastic.co created

customresourcedefinition.apiextensions.k8s.io/stackconfigpolicies.stackconfigpolicy.k8s.elastic.co created

[root@k8s-192-168-1-140 ~]#
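As a quick sanity check, the newly registered CRDs can be listed before moving on (a minimal verification, assuming kubectl still points at the same cluster):

# All Elastic CRDs end with the k8s.elastic.co group suffix
kubectl get crd | grep k8s.elastic.co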

Install the operator and its RBAC rules with kubectl apply

[root@k8s-192-168-1-140 ~]# kubectl apply -f https://download.elastic.co/downloads/eck/3.2.0/operator.yaml

namespace/elastic-system created

serviceaccount/elastic-operator created

secret/elastic-webhook-server-cert created

configmap/elastic-operator created

clusterrole.rbac.authorization.k8s.io/elastic-operator created

clusterrole.rbac.authorization.k8s.io/elastic-operator-view created

clusterrole.rbac.authorization.k8s.io/elastic-operator-edit created

clusterrolebinding.rbac.authorization.k8s.io/elastic-operator created

service/elastic-webhook-server created

statefulset.apps/elastic-operator created

validatingwebhookconfiguration.admissionregistration.k8s.io/elastic-webhook.k8s.elastic.co created

[root@k8s-192-168-1-140 ~]#

Check that the operator has started

[root@k8s-192-168-1-140 ~]# kubectl get -n elastic-system pods

NAME READY STATUS RESTARTS AGE

elastic-operator-0 1/1 Running 0 8m38s

[root@k8s-192-168-1-140 ~]#
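If the operator Pod does not reach Running, or to confirm it is reconciling resources, its logs can be followed (a minimal check using only the names created above):

# Follow the ECK operator logs
kubectl -n elastic-system logs -f statefulset/elastic-operator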

Deploy an Elasticsearch cluster

The operator automatically creates and manages the Kubernetes resources needed to reach the desired state of the Elasticsearch cluster. It may take a few minutes for all the resources to be created and for the cluster to be ready for use.

cat <<EOF | kubectl apply -f -
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 9.2.2
  nodeSets:
  - name: default
    count: 1
    config:
      node.store.allow_mmap: false
EOF
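After applying the manifest, the Elasticsearch custom resource reports the cluster's health and phase, which can be watched until it becomes green and Ready (a quick way to follow progress, no extra assumptions):

# Watch HEALTH and PHASE of the quickstart cluster
kubectl get elasticsearch quickstart -w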

Storage

By default a 1Gi volume is requested when the cluster is created; the desired capacity and StorageClass can be declared explicitly in the manifest instead:

cat <<EOF | kubectl apply -f -
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 9.2.2
  nodeSets:
  - name: default
    count: 1
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 5Gi
        storageClassName: nfs-storage
    config:
      node.store.allow_mmap: false
EOF

Check the deployment status

[root@k8s-192-168-1-140 ~]#

[root@k8s-192-168-1-140 ~]# kubectl get pod -A

NAMESPACE NAME READY STATUS RESTARTS AGE

default nginx-66686b6766-tdwt2 1/1 Running 2 (<invalid> ago) 81d

default quickstart-es-default-0 0/1 Pending 0 2m1s

elastic-system elastic-operator-0 1/1 Running 0 12m

kube-system calico-kube-controllers-78dcb7b647-8f2ph 1/1 Running 2 (<invalid> ago) 81d

kube-system calico-node-hpwvr 1/1 Running 2 (<invalid> ago) 81d

kube-system coredns-6746f4cb74-bhkv8 1/1 Running 2 (<invalid> ago) 81d

kube-system metrics-server-55c56cb875-bwbpr 1/1 Running 2 (<invalid> ago) 81d

kube-system node-local-dns-nz4q7 1/1 Running 2 (<invalid> ago) 81d

[root@k8s-192-168-1-140 ~]#

View the logs

[root@k8s-192-168-1-140 ~]# kubectl logs -f quickstart-es-default-0

Defaulted container "elasticsearch" out of: elasticsearch, elastic-internal-init-filesystem (init), elastic-internal-suspend (init)

[root@k8s-192-168-1-140 ~]#
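Because the Pod is still Pending, the elasticsearch container has not started and therefore produces no logs yet. Describing the Pod shows the scheduling events; in this environment it is most likely waiting for its PersistentVolumeClaim to bind, since no StorageClass exists yet, which the NFS provisioner below provides:

# Events at the bottom explain why the Pod is Pending (e.g. unbound PersistentVolumeClaims)
kubectl describe pod quickstart-es-default-0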

Install NFS for dynamic provisioning

[root@k8s-192-168-1-140 ~]# yum install nfs-utils -y

[root@k8s-192-168-1-140 ~]# mkdir /nfs

[root@k8s-192-168-1-140 ~]# vim /etc/exports

/nfs *(rw,sync,no_root_squash,no_subtree_check)

[root@k8s-192-168-1-140 ~]# systemctl restart rpcbind

[root@k8s-192-168-1-140 ~]# systemctl restart nfs-server

[root@k8s-192-168-1-140 ~]# systemctl enable rpcbind

[root@k8s-192-168-1-140 ~]# systemctl enable nfs-server
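To confirm the export is active before wiring it into Kubernetes, the exported directories can be listed (a simple check run on the NFS host itself):

# List directories exported by the NFS server
showmount -e 192.168.1.140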

Write the NFS provisioner manifests for Kubernetes

[root@k8s-192-168-1-140 ~]# vim nfs-storage.yaml

[root@k8s-192-168-1-140 ~]#

[root@k8s-192-168-1-140 ~]# cat nfs-storage.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true" ## whether to archive the volume contents when the PV is deleted
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        image: registry.cn-hangzhou.aliyuncs.com/chenby/nfs-subdir-external-provisioner:v4.0.2
        # resources:
        #   limits:
        #     cpu: 10m
        #   requests:
        #     cpu: 10m
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: k8s-sigs.io/nfs-subdir-external-provisioner
        - name: NFS_SERVER
          value: 192.168.1.140 ## address of your NFS server
        - name: NFS_PATH
          value: /nfs/ ## directory exported by the NFS server
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.1.140
          path: /nfs/
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

Apply the manifests

[root@k8s-192-168-1-140 ~]# kubectl apply -f nfs-storage.yaml

storageclass.storage.k8s.io/nfs-storage created

deployment.apps/nfs-client-provisioner created

serviceaccount/nfs-client-provisioner created

clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created

clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created

role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created

rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created

[root@k8s-192-168-1-140 ~]#
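Before relying on dynamic provisioning, it is worth confirming that the provisioner Pod itself is Running (a quick check using the app label from the Deployment above):

# The provisioner must be Running for PVCs to be provisioned
kubectl get pods -l app=nfs-client-provisioner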

Check the storage

[root@k8s-192-168-1-140 ~]# kubectl get storageclasses.storage.k8s.io

NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE

nfs-storage (default) k8s-sigs.io/nfs-subdir-external-provisioner Delete Immediate false 6h7m

[root@k8s-192-168-1-140 ~]#

[root@k8s-192-168-1-140 ~]# kubectl get pvc

NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE

elasticsearch-data-quickstart-es-default-0 Bound pvc-2df832aa-1c54-4af6-8384-5e5c5f167445 5Gi RWO nfs-storage <unset> 39s

[root@k8s-192-168-1-140 ~]#

[root@k8s-192-168-1-140 ~]# kubectl get pv

NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE

pvc-2df832aa-1c54-4af6-8384-5e5c5f167445 5Gi RWO Delete Bound default/elasticsearch-data-quickstart-es-default-0 nfs-storage <unset> 43s

[root@k8s-192-168-1-140 ~]#
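With the claim now bound to an NFS-backed volume, the previously Pending quickstart-es-default-0 Pod should be scheduled and start; it can be watched until it reports 1/1 Running:

kubectl get pod quickstart-es-default-0 -w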

Check the Elasticsearch services

[root@k8s-192-168-1-140 ~]# kubectl get service

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

kubernetes ClusterIP 10.68.0.1 <none> 443/TCP 81d

nginx NodePort 10.68.148.53 <none> 80:30330/TCP 81d

quickstart-es-default ClusterIP None <none> 9200/TCP 74s

quickstart-es-http ClusterIP 10.68.66.232 <none> 9200/TCP 75s

quickstart-es-internal-http ClusterIP 10.68.121.73 <none> 9200/TCP 75s

quickstart-es-transport ClusterIP None <none> 9300/TCP 75s

[root@k8s-192-168-1-140 ~]#

Retrieve the Elasticsearch password

[root@k8s-192-168-1-140 ~]# PASSWORD=$(kubectl get secret quickstart-es-elastic-user -o go-template='{{.data.elastic | base64decode}}')

[root@k8s-192-168-1-140 ~]#

[root@k8s-192-168-1-140 ~]# kubectl get secret quickstart-es-elastic-user -o go-template='{{.data.elastic | base64decode}}'

V3VPqwQMURTSg6zFYvVIsH13[root@k8s-192-168-1-140 ~]#

[root@k8s-192-168-1-140 ~]#

[root@k8s-192-168-1-140 ~]# curl -u "elastic:$PASSWORD" -k "https://10.68.66.232:9200"

{
  "name" : "quickstart-es-default-0",
  "cluster_name" : "quickstart",
  "cluster_uuid" : "JNqGubnmSeao_LO-JmypHg",
  "version" : {
    "number" : "9.2.2",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "ed771e6976fac1a085affabd45433234a4babeaf",
    "build_date" : "2025-11-27T08:06:51.614397514Z",
    "build_snapshot" : false,
    "lucene_version" : "10.3.2",
    "minimum_wire_compatibility_version" : "8.19.0",
    "minimum_index_compatibility_version" : "8.0.0"
  },
  "tagline" : "You Know, for Search"
}

[root@k8s-192-168-1-140 ~]#
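The curl above targets the ClusterIP of the quickstart-es-http service, which is only reachable from inside the cluster network. From a workstation, the same endpoint can also be reached through a port-forward (a sketch assuming kubectl access from that machine; -k is still needed because ECK serves a self-signed certificate by default):

kubectl port-forward service/quickstart-es-http 9200

# In a second terminal
curl -u "elastic:$PASSWORD" -k "https://localhost:9200"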

Install Kibana

cat <<EOF | kubectl apply -f -
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 9.2.2
  count: 1
  elasticsearchRef:
    name: quickstart
EOF

Check the service status and the password

[root@k8s-192-168-1-140 ~]# kubectl get kibana

NAME HEALTH NODES VERSION AGE

quickstart red 9.2.2 16s

[root@k8s-192-168-1-140 ~]#

# Retrieve the password

[root@k8s-192-168-1-140 ~]# kubectl get secret quickstart-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode; echo

V3VPqwQMURTSg6zFYvVIsH13

[root@k8s-192-168-1-140 ~]#

[root@k8s-192-168-1-140 ~]# kubectl get service

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

kubernetes ClusterIP 10.68.0.1 <none> 443/TCP 81d

nginx NodePort 10.68.148.53 <none> 80:30330/TCP 81d

quickstart-es-default ClusterIP None <none> 9200/TCP 2m47s

quickstart-es-http ClusterIP 10.68.66.232 <none> 9200/TCP 2m48s

quickstart-es-internal-http ClusterIP 10.68.121.73 <none> 9200/TCP 2m48s

quickstart-es-transport ClusterIP None <none> 9300/TCP 2m48s

quickstart-kb-http ClusterIP 10.68.103.103 <none> 5601/TCP 24s

[root@k8s-192-168-1-140 ~]#

[root@k8s-192-168-1-140 ~]# kubectl get service quickstart-kb-http

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

quickstart-kb-http ClusterIP 10.68.103.103 <none> 5601/TCP 2m39s

[root@k8s-192-168-1-140 ~]#

Expose access to Kibana

[root@k8s-192-168-1-140 ~]# kubectl port-forward --address 0.0.0.0 service/quickstart-kb-http 5601

Forwarding from 0.0.0.0:5601 -> 5601

# Login URL
https://192.168.1.140:5601/login

Username: elastic

Password: V3VPqwQMURTSg6zFYvVIsH13

Scale the Elasticsearch cluster

cat <<EOF | kubectl apply -f -
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 9.2.2
  nodeSets:
  - name: default
    count: 3
    config:
      node.store.allow_mmap: false
EOF
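Once the additional Pods are ready, the cluster should report three nodes; this can be confirmed through the _cat/nodes API using the same credentials and service address as before:

# Expect three entries, one per Elasticsearch node
curl -u "elastic:$PASSWORD" -k "https://10.68.66.232:9200/_cat/nodes"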

Check the status

[root@k8s-192-168-1-140 ~]# kubectl get pod -A

NAMESPACE NAME READY STATUS RESTARTS AGE

default nfs-client-provisioner-58d465c998-w8mfq 1/1 Running 0 5h38m

default nginx-66686b6766-tdwt2 1/1 Running 2 (<invalid> ago) 81d

default quickstart-es-default-0 1/1 Running 0 5h51m

default quickstart-kb-57cc78f6b8-b8fpf 1/1 Running 0 5h24m

elastic-system elastic-operator-0 1/1 Running 0 6h2m

kube-system calico-kube-controllers-78dcb7b647-8f2ph 1/1 Running 2 (<invalid> ago) 81d

kube-system calico-node-hpwvr 1/1 Running 2 (<invalid> ago) 81d

kube-system coredns-6746f4cb74-bhkv8 1/1 Running 2 (<invalid> ago) 81d

kube-system metrics-server-55c56cb875-bwbpr 1/1 Running 2 (<invalid> ago) 81d

kube-system node-local-dns-nz4q7 1/1 Running 2 (<invalid> ago) 81d

[root@k8s-192-168-1-140 ~]# kubectl get pod -A -w

NAMESPACE NAME READY STATUS RESTARTS AGE

default nfs-client-provisioner-58d465c998-w8mfq 1/1 Running 0 5h38m

default nginx-66686b6766-tdwt2 1/1 Running 2 (<invalid> ago) 81d

default quickstart-es-default-0 1/1 Running 0 5h51m

default quickstart-kb-57cc78f6b8-b8fpf 1/1 Running 0 5h24m

elastic-system elastic-operator-0 1/1 Running 0 6h2m

kube-system calico-kube-controllers-78dcb7b647-8f2ph 1/1 Running 2 (<invalid> ago) 81d

kube-system calico-node-hpwvr 1/1 Running 2 (<invalid> ago) 81d

kube-system coredns-6746f4cb74-bhkv8 1/1 Running 2 (<invalid> ago) 81d

kube-system metrics-server-55c56cb875-bwbpr 1/1 Running 2 (<invalid> ago) 81d

kube-system node-local-dns-nz4q7 1/1 Running 2 (<invalid> ago) 81d

default quickstart-es-default-1 0/1 Pending 0 0s

default quickstart-es-default-1 0/1 Pending 0 0s

default quickstart-es-default-1 0/1 Pending 0 0s

default quickstart-es-default-1 0/1 Init:0/2 0 0s

default quickstart-es-default-1 0/1 Init:0/2 0 1s

default quickstart-es-default-0 1/1 Running 0 5h51m

default quickstart-es-default-1 0/1 Init:0/2 0 1s

default quickstart-es-default-0 1/1 Running 0 5h51m

default quickstart-es-default-1 0/1 Init:0/2 0 2s

default quickstart-es-default-1 0/1 Init:1/2 0 3s

default quickstart-es-default-1 0/1 PodInitializing 0 4s

default quickstart-es-default-1 0/1 Running 0 5s

default quickstart-es-default-1 1/1 Running 0 26s

default quickstart-es-default-2 0/1 Pending 0 0s

default quickstart-es-default-2 0/1 Pending 0 0s

default quickstart-es-default-2 0/1 Pending 0 0s

default quickstart-es-default-2 0/1 Init:0/2 0 0s

default quickstart-es-default-2 0/1 Init:0/2 0 1s

default quickstart-es-default-0 1/1 Running 0 5h51m

default quickstart-es-default-1 1/1 Running 0 30s

default quickstart-es-default-2 0/1 Init:0/2 0 2s

default quickstart-es-default-1 1/1 Running 0 30s

default quickstart-es-default-2 0/1 Init:0/2 0 2s

default quickstart-es-default-0 1/1 Running 0 5h51m

default quickstart-es-default-2 0/1 Init:1/2 0 3s

default quickstart-es-default-2 0/1 PodInitializing 0 4s

default quickstart-es-default-2 0/1 Running 0 5s

default quickstart-es-default-2 1/1 Running 0 33s

^C[root@k8s-192-168-1-140 ~]#

[root@k8s-192-168-1-140 ~]#

[root@k8s-192-168-1-140 ~]#

[root@k8s-192-168-1-140 ~]# kubectl get pod -A -w

NAMESPACE NAME READY STATUS RESTARTS AGE

default nfs-client-provisioner-58d465c998-w8mfq 1/1 Running 0 5h40m

default nginx-66686b6766-tdwt2 1/1 Running 2 (<invalid> ago) 81d

default quickstart-es-default-0 1/1 Running 0 5h53m

default quickstart-es-default-1 1/1 Running 0 2m19s

default quickstart-es-default-2 1/1 Running 0 111s

default quickstart-kb-57cc78f6b8-b8fpf 1/1 Running 0 5h26m

elastic-system elastic-operator-0 1/1 Running 0 6h4m

kube-system calico-kube-controllers-78dcb7b647-8f2ph 1/1 Running 2 (<invalid> ago) 81d

kube-system calico-node-hpwvr 1/1 Running 2 (<invalid> ago) 81d

kube-system coredns-6746f4cb74-bhkv8 1/1 Running 2 (<invalid> ago) 81d

kube-system metrics-server-55c56cb875-bwbpr 1/1 Running 2 (<invalid> ago) 81d

kube-system node-local-dns-nz4q7 1/1 Running 2 (<invalid> ago) 81d

Uninstall and clean up

# Delete all Elastic resources in all namespaces

kubectl get namespaces --no-headers -o custom-columns=:metadata.name \

| xargs -n1 kubectl delete elastic --all -n

# Delete the operator and the CRDs

kubectl delete -f https://download.elastic.co/downloads/eck/3.2.0/operator.yaml

kubectl delete -f https://download.elastic.co/downloads/eck/3.2.0/crds.yaml
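Depending on the StorageClass reclaim policy and the archiveOnDelete setting used above, data directories may remain (archived) on the NFS server after the cluster is deleted; left-over claims, volumes, and archived folders can be checked and removed manually (a hedged suggestion, adapt to your environment):

# Check for remaining claims/volumes and archived data on the NFS export
kubectl get pvc,pv
ls /nfs/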
