Redis

Installation

yum install -y gcc gcc-c++ make

wget redis-3.2.6.tar.gz
tar xzvf redis-3.2.6.tar.gz
cd redis-3.2.6
make

mkdir /usr/local/redis
cp src/redis-server /usr/local/redis/
cp src/redis-cli /usr/local/redis/
cp redis.conf /usr/local/redis/

Start

cd /usr/local/redis/
./redis-server redis.conf

Check the version

./redis-server -v

Configuration

Edit redis.conf

# Run as a background daemon
daemonize yes

# Comment out bind to allow connections from other hosts
# bind 127.0.0.1

# Disable protected mode
protected-mode no

# Require a password
requirepass 123456

Common commands

redis-cli -h host -p port -a password

keys pattern

set key value

get key

exists key

del key [key ...]

type key

ttl key

llen key

config get maxclients

Benchmarking

redis-benchmark -q -n 100000

redis-benchmark -t set,get -n 100000 -q

redis-benchmark -h 127.0.0.1 -p 6379 -c 100 -n 1000000 -d 20

Monitoring

redis-cli -p 7379 info clients

redis-cli -p 7379 client list | grep 192.168.128.11 | wc -l


Apache Tomcat


History

  • 6.0 2007-02-28 First Apache Tomcat release to support the Servlet 2.5, JSP 2.1, and EL 2.1 specifications.
  • 7.0 2011-01-14 First Apache Tomcat release to support the Servlet 3.0, JSP 2.2, EL 2.2, and WebSocket specifications.
  • 8.0 2014-06-25 First Apache Tomcat release to support the Servlet 3.1, JSP 2.3, and EL 3.0 specifications.
  • 8.5 2016-06-13 Adds support for HTTP/2, OpenSSL for JSSE, TLS virtual hosting and JASPIC 1.1. Created from Tomcat 9, following delays to Java EE 8.
  • 9.0 2018-01-18 First Apache Tomcat release to support the Servlet 4.0 specifications.
  • 10.0 First Apache Tomcat release to support the Servlet 5.0, JSP 3.0, EL 4.0, WebSocket 2.0 and Authentication 2.0 specifications.


Installation

cd ~/download
wget tomcat7.tar.gz
mkdir /usr/local/tomcat && cd /usr/local/tomcat
tar xzvf ~/download/tomcat7.tar.gz
mv apache-tomcat-7.0.103/ tomcat7


Running

  • Maven Plugin
  • Apache Tomcat Maven Plugin
  • Exploded web application
  • Jetbrains

JVM settings

Edit setenv.sh

Linux

# JVM to use
export JAVA_HOME=/usr/local/java/jdk1.8
# Heap size
JAVA_OPTS="-server -Xms1g -Xmx1g -Djava.awt.headless=true"

chmod 775 setenv.sh
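The same file can be created in one step with a heredoc; catalina.sh sources bin/setenv.sh when it exists. This is a sketch: in a real setup run it inside Tomcat's bin/ directory, whereas here a temp directory keeps the dry run side-effect free.

```shell
# Create setenv.sh in one step (temp dir keeps this dry run side-effect free;
# in a real deployment this would be Tomcat's bin/ directory).
cd "$(mktemp -d)"
cat > setenv.sh <<'EOF'
# JVM to use
export JAVA_HOME=/usr/local/java/jdk1.8
# Heap size
JAVA_OPTS="-server -Xms1g -Xmx1g -Djava.awt.headless=true"
EOF
chmod 775 setenv.sh
grep JAVA_OPTS setenv.sh
```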

Windows

rem Adjust JAVA_HOME to the local JDK installation path
set JAVA_HOME=C:\java\jdk1.8
set JAVA_OPTS=-server -Xms1g -Xmx1g

Configuration

Basic server.xml configuration

<?xml version='1.0' encoding='utf-8'?>
<Server port="8005" shutdown="SHUTDOWN">
  <Listener className="org.apache.catalina.startup.VersionLoggerListener" />
  <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
  <Listener className="org.apache.catalina.core.JasperListener" />
  <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" />
  <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
  <Listener className="org.apache.catalina.core.ThreadLocalLeakPreventionListener" />

  <Service name="Catalina">
    <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" URIEncoding="utf-8"/>
    <Engine name="Catalina" defaultHost="localhost">
      <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true">
        <Context docBase="/www/tomcatMax" path="" />
        <!-- Hide error details from clients -->
        <Valve className="org.apache.catalina.valves.ErrorReportValve" showReport="false" showServerInfo="false" />
      </Host>
    </Engine>
  </Service>
</Server>

tomcatThreadPool

<Executor name="tomcatThreadPool" namePrefix="catalina-exec-" maxThreads="200" minSpareThreads="150" maxIdleTime="20000"/>

<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           URIEncoding="UTF-8"
           keepAliveTimeout="60000"
           maxKeepAliveRequests="100"
           executor="tomcatThreadPool" />

Defaults

connectionTimeout = 20000
keepAliveTimeout = connectionTimeout
maxKeepAliveRequests = 100


Virtual host configuration

<Service name="Catalina">
  <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" />
  <Engine name="Catalina" defaultHost="xiongjiaxuan.com">
    <Host name="xiongjiaxuan.com" appBase="webapps" unpackWARs="true" autoDeploy="true">
      <Context docBase="/www" path="" />
    </Host>
    <Host name="demo.xiongjiaxuan.com" appBase="webapps" unpackWARs="true" autoDeploy="true">
      <Context docBase="/www2" path="" />
    </Host>
  </Engine>
</Service>

Multi-instance deployment

Keep configuration separate from the binaries: the Tomcat installation directory keeps only the bin and lib directories, while each instance directory holds a copy of the original conf plus newly created empty bin, logs, webapps, and work directories.
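The layout described above can be sketched as follows. Real paths would be /usr/local/tomcat/tomcat7 (CATALINA_HOME) and /usr/local/tomcat/instance (CATALINA_BASE); a temp directory is used here so the commands can be dry-run anywhere.

```shell
# Build the CATALINA_BASE skeleton next to a stub CATALINA_HOME.
ROOT=$(mktemp -d)
CATALINA_HOME="$ROOT/tomcat7"
CATALINA_BASE="$ROOT/instance"
mkdir -p "$CATALINA_HOME/conf"      # stands in for the unpacked distribution's conf
mkdir -p "$CATALINA_BASE/bin" "$CATALINA_BASE/logs" "$CATALINA_BASE/webapps" "$CATALINA_BASE/work"
cp -r "$CATALINA_HOME/conf" "$CATALINA_BASE/conf"   # each instance keeps its own conf copy
ls "$CATALINA_BASE"
```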

Create start-instance.sh under instance/bin

export CATALINA_HOME=/usr/local/tomcat/tomcat7
export CATALINA_BASE=/usr/local/tomcat/instance
/usr/local/tomcat/tomcat7/bin/startup.sh

Create stop-instance.sh under instance/bin

export CATALINA_HOME=/usr/local/tomcat/tomcat7
export CATALINA_BASE=/usr/local/tomcat/instance
/usr/local/tomcat/tomcat7/bin/shutdown.sh


Performance

Set Jasper development to false

Is Jasper used in development mode? If true, the frequency at which JSPs are checked for modification may be specified via the modificationTestInterval parameter.

Edit conf/web.xml

<servlet>
  ...
  <init-param>
    <param-name>development</param-name>
    <param-value>false</param-value>
  </init-param>
  ...
</servlet>

Remote monitoring with jconsole

jconsole 192.168.137.59:9000

CATALINA_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9000 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.local.only=false -Djava.rmi.server.hostname=192.168.137.59"

Apache Kafka

History

  • Open sourced in early 2011
  • Graduation from the Apache Incubator occurred on 23 October 2012

Installation

Download

cd ~/download
wget https://archive.apache.org/dist/kafka/3.0.0/kafka_2.13-3.0.0.tgz
cd /usr/local
tar xzvf ~/download/kafka_2.13-3.0.0.tgz
mv kafka_2.13-3.0.0 kafka

Start

bin/zookeeper-server-start.sh config/zookeeper.properties
bin/kafka-server-start.sh config/server.properties

Run in the background

bin/zookeeper-server-start.sh -daemon config/zookeeper.properties
tail -f logs/zookeeper.out

bin/kafka-server-start.sh -daemon config/server.properties
tail -f logs/server.log

Stop

bin/zookeeper-server-stop.sh
bin/kafka-server-stop.sh

Notes

  • ZooKeeper defaults to a 512 MB JVM heap
  • Kafka defaults to a 1 GB JVM heap
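If those defaults are too large for a small host, the heap can be lowered by exporting KAFKA_HEAP_OPTS before running the start scripts, since they only apply their default heap when the variable is unset. A sketch; the 256 MB figure is an arbitrary example:

```shell
# The start scripts fall back to their default heap only when
# KAFKA_HEAP_OPTS is unset, so exporting it first overrides the default.
export KAFKA_HEAP_OPTS="-Xms256M -Xmx256M"
echo "$KAFKA_HEAP_OPTS"
# bin/kafka-server-start.sh -daemon config/server.properties
```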

Configuration

Edit config/server.properties

listeners=PLAINTEXT://<internal-ip>:9092
advertised.listeners=PLAINTEXT://<public-ip>:9092

advertised.listeners must be configured.

Reading and writing topics

Create a topic

bin/kafka-topics.sh --create --topic foo --bootstrap-server ip:9092 --partitions 1 --replication-factor 1

Write to a topic

bin/kafka-console-producer.sh --topic foo --bootstrap-server ip:9092

# Use the settings from producer.properties
bin/kafka-console-producer.sh --topic foo --producer.config=config/producer.properties --bootstrap-server ip:9092

Read from a topic

# Anonymous group id, consuming from the beginning
bin/kafka-console-consumer.sh --topic foo --from-beginning --bootstrap-server ip:9092

# Specify a group id
bin/kafka-console-consumer.sh --topic foo --consumer-property group.id=test1 --bootstrap-server ip:9092

# Use the settings from consumer.properties
bin/kafka-console-consumer.sh --topic foo --consumer.config=config/consumer.properties --bootstrap-server ip:9092

Checking status

List topics

bin/kafka-topics.sh --bootstrap-server ip:9092 --list

Describe a topic

bin/kafka-topics.sh --describe --topic foo --bootstrap-server ip:9092

List consumer groups

bin/kafka-consumer-groups.sh --list --bootstrap-server ip:9092

Describe a consumer group

bin/kafka-consumer-groups.sh --describe --group test --bootstrap-server ip:9092

List consumer group members

bin/kafka-consumer-groups.sh --describe --group test --members --bootstrap-server ip:9092

Troubleshooting

Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member.

To reset, terminate the Kafka environment and delete the data of your local Kafka environment, including any events created along the way:

rm -rf /tmp/kafka-logs /tmp/zookeeper
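This error is typically raised when processing a batch takes longer than max.poll.interval.ms, so the group evicts the consumer and rebalances. Before wiping local state, it is often enough to tune the consumer; the values below are illustrative only and should be adjusted to the actual processing time:

```properties
# consumer.properties (illustrative values; tune to the workload)
# Fetch fewer records per poll() call:
max.poll.records=100
# Allow up to 10 minutes of processing between polls:
max.poll.interval.ms=600000
```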

Authentication

Edit server.properties

listeners=SASL_PLAINTEXT://ip:9092
advertised.listeners=SASL_PLAINTEXT://ip:9092

security.inter.broker.protocol=SASL_PLAINTEXT
sasl.enabled.mechanisms=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN

Edit zookeeper.properties

authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
jaasLoginRenew=3600000

Edit producer.properties

security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
compression.type=none

Edit consumer.properties

security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN

Edit zookeeper_jaas.conf

Server {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="xiongjiaxuan"
    user_admin="xiongjiaxuan";
};

Edit kafka_server_jaas.conf

KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="xiongjiaxuan"
    user_admin="xiongjiaxuan"
    user_cms="cms";
};

Client {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="xiongjiaxuan";
};

Starting with authentication

Add the following before the ZooKeeper start command, or inside zookeeper-server-start.sh:

export KAFKA_OPTS="-Djava.security.auth.login.config=/usr/local/kafka/config/zookeeper_jaas.conf"

Add the following before the Kafka start command, or inside kafka-server-start.sh:

export KAFKA_OPTS="-Djava.security.auth.login.config=/usr/local/kafka/config/kafka_server_jaas.conf"

Client testing

Edit kafka_client_jaas.conf

KafkaClient {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="xiongjiaxuan";
};

Test reading and writing

export KAFKA_OPTS="-Djava.security.auth.login.config=/usr/local/kafka/config/kafka_client_jaas.conf"

bin/kafka-console-producer.sh --topic foo --producer.config=config/producer.properties --bootstrap-server ip:9092
bin/kafka-console-consumer.sh --topic foo --consumer.config=config/consumer.properties --bootstrap-server ip:9092

Admin commands with authentication

Edit config/config.properties

sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="USER" password="PASSWORD";
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN

bin/kafka-topics.sh --list --bootstrap-server 8.210.141.55:9092 --command-config config/config.properties
bin/kafka-topics.sh --describe --topic cms-site --bootstrap-server 8.210.141.55:9092 --command-config config/config.properties

bin/kafka-consumer-groups.sh --list --bootstrap-server 8.210.141.55:9092 --command-config config/config.properties
bin/kafka-consumer-groups.sh --describe --group cms-article-consumer-dev --bootstrap-server 8.210.141.55:9092 --command-config config/config.properties
bin/kafka-consumer-groups.sh --describe --group cms-article-consumer-dev --members --bootstrap-server 8.210.141.55:9092 --command-config config/config.properties

bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list 8.210.141.55:9092 --topic foo --command-config config/config.properties

Advanced

Optimize

Kafka Rebalancing

Java integration: producer

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("acks", "all");
props.put("retries", 0);
props.put("batch.size", 16384);
props.put("linger.ms", 1);
props.put("buffer.memory", 33554432);
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

Producer<String, String> producer = new KafkaProducer<>(props);
for (int i = 0; i < 100; i++)
    producer.send(new ProducerRecord<String, String>("my-topic", Integer.toString(i), Integer.toString(i)));

producer.close();

Enabling transactions

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.errors.AuthorizationException;
import org.apache.kafka.common.errors.OutOfOrderSequenceException;
import org.apache.kafka.common.errors.ProducerFencedException;
import org.apache.kafka.common.serialization.StringSerializer;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("transactional.id", "my-transactional-id");
Producer<String, String> producer = new KafkaProducer<>(props, new StringSerializer(), new StringSerializer());

producer.initTransactions();

try {
    producer.beginTransaction();
    for (int i = 0; i < 100; i++)
        producer.send(new ProducerRecord<>("my-topic", Integer.toString(i), Integer.toString(i)));
    producer.commitTransaction();
} catch (ProducerFencedException | OutOfOrderSequenceException | AuthorizationException e) {
    // We can't recover from these exceptions, so our only option is to close the producer and exit.
    producer.close();
} catch (KafkaException e) {
    // For all other exceptions, just abort the transaction and try again.
    producer.abortTransaction();
}
producer.close();


Java integration: consumer

Auto-committing offsets

import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("group.id", "test");
props.put("enable.auto.commit", "true");
props.put("auto.commit.interval.ms", "1000");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Arrays.asList("foo", "bar"));
while (true) {
    // poll(long) was removed in newer clients; use the Duration overload
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
    for (ConsumerRecord<String, String> record : records)
        System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
}

Manually committing offsets

import java.time.Duration;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("group.id", "test");
props.put("enable.auto.commit", "false");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Arrays.asList("foo", "bar"));
final int minBatchSize = 200;
List<ConsumerRecord<String, String>> buffer = new ArrayList<>();
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
    for (ConsumerRecord<String, String> record : records) {
        buffer.add(record);
    }
    if (buffer.size() >= minBatchSize) {
        insertIntoDb(buffer);   // application-defined persistence step
        consumer.commitSync();  // commit only after the batch is safely stored
        buffer.clear();
    }
}
