Checking a key's memory footprint with `MEMORY USAGE`, then scanning the keyspace for the biggest key of each type with `redis-cli --bigkeys`:

```
127.0.0.1:6379> hset k1 name 123 age 123 sex 123 email 123@123
(integer) 1
127.0.0.1:6379> memory usage k1
(integer) 96
```

```
[root@CentOS7 ~]# redis-cli --bigkeys

# Scanning the entire keyspace to find biggest keys as well as
# average sizes per key type.  You can use -i 0.1 to sleep 0.1 sec
# per 100 SCAN commands (not usually needed).

[00.00%] Biggest hash found so far '"k1"' with 4 fields

-------- summary -------

Sampled 1 keys in the keyspace!
Total key length in bytes is 2 (avg len 2.00)

Biggest hash found '"k1"' has 4 fields

0 strings with 0 bytes (00.00% of keys, avg size 0.00)
0 lists with 0 items (00.00% of keys, avg size 0.00)
1 hashs with 4 fields (100.00% of keys, avg size 4.00)
0 streams with 0 entries (00.00% of keys, avg size 0.00)
0 sets with 0 members (00.00% of keys, avg size 0.00)
0 zsets with 0 members (00.00% of keys, avg size 0.00)
[root@CentOS7 ~]#
```

Data layout matters as well. Take an object such as:

```java
class user {
    private String name;
    private Integer age;
}
```

There are three common ways to lay it out in Redis:

- A single JSON string: `user:1` → `{"name": "Jack", "age": 21}`
- One string key per field: `user:1:name` → `Jack`, `user:1:age` → `21`
- A hash: `user:1` → `name: jack, age: 21` (sketched in code below)
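A minimal sketch of the hash layout, assuming the Jedis client (the notes above do not name a client library):

```java
import redis.clients.jedis.Jedis;

import java.util.Map;

public class UserHashExample {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            // One hash per user, one field per attribute.
            jedis.hset("user:1", Map.of("name", "jack", "age", "21"));

            // Read a single attribute without deserializing the whole object...
            String age = jedis.hget("user:1", "age");

            // ...or fetch the whole object in one round trip.
            Map<String, String> user = jedis.hgetAll("user:1");

            System.out.println(age + " / " + user);
        }
    }
}
```

Compared with the JSON-string layout, individual fields stay directly addressable; compared with one string key per field, all of a user's data lives under a single key.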
Now suppose a hash-type key holds 1,000,000 field/value pairs, where the field is an auto-increment id. What problems does such a key have, and how can it be optimized?

| key | field | value |
|---|---|---|
| someKey | id:0 | value0 |
| someKey | … | … |
| someKey | id:999999 | value999999 |
Such a key is a textbook BigKey. One obvious workaround is to break it up into plain string keys, one per id, but that layout has problems of its own: the string type gets little low-level memory optimization, so overall memory use is higher, and batch-fetching the values becomes cumbersome (see the sketch below).
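To make the batch-access point concrete, here is a rough sketch under two assumptions not present in the notes above: the Jedis client, and a made-up `someKey:id:<n>` naming scheme for the string variant. It writes the same data both ways and then reads it back in bulk.

```java
import redis.clients.jedis.Jedis;

import java.util.List;
import java.util.Map;

public class BigHashVsStrings {
    // 1_000 keeps the demo quick; the question above is about 1,000,000.
    private static final int N = 1_000;

    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            // Layout 1: one big hash, one field per id (as in the table above).
            for (int id = 0; id < N; id++) {
                jedis.hset("someKey", "id:" + id, "value" + id);
            }
            // Bulk read: a single command against a single key.
            Map<String, String> all = jedis.hgetAll("someKey");

            // Layout 2: the split into plain string keys discussed above.
            for (int id = 0; id < N; id++) {
                jedis.set("someKey:id:" + id, "value" + id);
            }
            // Bulk read: the caller has to rebuild the full key list and MGET it,
            // which is the "cumbersome batch access" drawback.
            String[] keys = new String[N];
            for (int id = 0; id < N; id++) {
                keys[id] = "someKey:id:" + id;
            }
            List<String> values = jedis.mget(keys);

            System.out.println(all.size() + " hash fields, " + values.size() + " string values");
        }
    }
}
```

Note that on a hash that really holds a million fields, HGETALL itself would block the server for a noticeable time; HSCAN is the usual way to walk such a key incrementally, but the data still sits behind a single key.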