Redis LRU Eviction Algorithm

When Redis's memory usage exceeds the physical memory limit, in-memory data starts being swapped to disk frequently. This makes Redis's performance drop sharply, and for a heavily accessed instance, storage at swap speed is essentially unusable.
Swapping must not happen in production. To cap memory usage, Redis provides the configuration parameter __maxmemory__ to keep memory from growing beyond the desired size. When actual memory usage exceeds __maxmemory__, Redis offers several selectable policies that let the user decide how to free space so that reads and writes can continue (see the configuration sketch after the list below).

  • noeviction: stops serving write requests (DEL requests are still served); read requests continue to work. This guarantees that no data is lost. This is the default eviction policy.
  • volatile-lru: evicts only keys that have an expiration set, least recently used keys first. Keys without an expiration are never evicted, so data that needs to be persisted will not suddenly disappear.
  • volatile-ttl: same scope as above, but instead of LRU the ordering is the key's remaining time to live (TTL): the smaller the TTL, the sooner the key is evicted.
  • volatile-random: same scope as above, but the evicted key is picked at random from the set of keys that have an expiration.
  • allkeys-lru: unlike volatile-lru, this policy evicts from the entire key set, not just the keys with an expiration, which means keys without an expiration can also be evicted.
  • allkeys-random: same as above, but keys are evicted at random from the entire key set.
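
For reference, here is a minimal redis.conf sketch that caps memory and selects one of the policies above; the values are examples only, not recommendations:

maxmemory 2gb
maxmemory-policy allkeys-lru
# number of keys sampled per eviction round by the approximate LRU (default 5)
maxmemory-samples 5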

Difference between volatile-xxx and allkeys-xxx

The volatile-xxx policies only evict keys that have an expiration set, while the allkeys-xxx policies evict from all keys. If Redis is used purely as a cache, use allkeys-xxx, and clients then do not need to set an expiration when writing to the cache. If you also want to use Redis's persistence features, use a volatile-xxx policy so that keys without an expiration are preserved.

The LRU Algorithm

How LRU works

Implementing LRU requires, in addition to the key/value dictionary, an auxiliary linked list whose elements are kept in access order. When the space is full, the element at the tail is evicted; when a dictionary element is accessed, it is moved to the head of the list, so cold data drifts toward the tail and ends up being evicted.
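
As a concrete illustration, here is a minimal Java sketch of this structure (my own example, not how Redis implements it). Java's LinkedHashMap already combines a hash table with a linked list: constructed with accessOrder=true it moves an entry toward the tail of its internal list on every access, and overriding removeEldestEntry evicts the coldest entry once the capacity is exceeded (LinkedHashMap keeps the hottest entry at the tail rather than the head, but the idea is the same).

import java.util.LinkedHashMap;
import java.util.Map;

/** True LRU cache built on LinkedHashMap: every access reorders the entry,
 *  and the eldest (coldest) entry is dropped once capacity is exceeded. */
public class SimpleLruCache<K, V> extends LinkedHashMap<K, V> {

    private final int capacity;

    public SimpleLruCache(int capacity) {
        // accessOrder = true: iteration order is least-recently-accessed first
        super(16, 0.75f, true);
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // called after every put(); returning true evicts the coldest entry
        return size() > capacity;
    }

    public static void main(String[] args) {
        SimpleLruCache<String, String> cache = new SimpleLruCache<>(3);
        cache.put("a", "a1");
        cache.put("b", "b1");
        cache.put("c", "c1");
        cache.get("a");        // touch "a" so it becomes the hottest entry
        cache.put("d", "d1");  // over capacity: the coldest key "b" is evicted
        System.out.println(cache.keySet()); // prints [c, a, d]
    }
}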

Redis uses an approximate LRU algorithm

Redis does not use a strict LRU algorithm because it would consume a large amount of extra memory and require major changes to the existing data structures. The approximate LRU algorithm is much simpler: it evicts elements by random sampling on top of the existing data structures and achieves an effect very close to true LRU. To implement it, Redis adds a small extra field to every key, 24 bits long, holding the timestamp of the last access. Redis handles LRU eviction lazily only, rather than mixing periodic and lazy processing: when Redis performs a write and finds that memory exceeds maxmemory, it runs one round of LRU eviction. The algorithm itself is simple: randomly sample 5 keys (configurable), evict the oldest one, and if memory is still above maxmemory, keep sampling until it drops below maxmemory. Which keys are eligible for sampling depends on the _maxmemory-policy_ setting. A rough sketch of the sampling idea follows.
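
To make the sampling idea concrete, here is a small self-contained Java sketch of approximate LRU (my own toy version, not Redis's actual C implementation; the class and field names are illustrative). Every entry carries a last-access timestamp, and eviction repeatedly samples a few random keys and removes the one accessed longest ago.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ThreadLocalRandom;

/** Toy approximate-LRU cache: eviction samples a few random keys and drops the oldest. */
public class SampledLruCache<K, V> {

    private static class Entry<V> {
        V value;
        long lastAccess; // plays the role of Redis's 24-bit per-key access clock
        Entry(V value) { this.value = value; this.lastAccess = System.nanoTime(); }
    }

    private final Map<K, Entry<V>> map = new HashMap<>();
    private final int maxSize;    // stands in for maxmemory (assumed >= 1)
    private final int sampleSize; // stands in for maxmemory-samples (Redis default: 5)

    public SampledLruCache(int maxSize, int sampleSize) {
        this.maxSize = maxSize;
        this.sampleSize = sampleSize;
    }

    public V get(K key) {
        Entry<V> e = map.get(key);
        if (e == null) return null;
        e.lastAccess = System.nanoTime(); // lazily refresh the access timestamp
        return e.value;
    }

    public void put(K key, V value) {
        // like Redis, evict only when the limit is exceeded, until there is room again
        while (!map.containsKey(key) && map.size() >= maxSize && !map.isEmpty()) {
            evictOne();
        }
        map.put(key, new Entry<>(value));
    }

    /** Randomly sample up to sampleSize keys and evict the one accessed longest ago. */
    private void evictOne() {
        List<K> candidates = new ArrayList<>(map.keySet()); // O(n) copy, fine for a sketch
        K oldestKey = null;
        long oldestTime = Long.MAX_VALUE;
        for (int i = 0; i < sampleSize; i++) {
            K k = candidates.get(ThreadLocalRandom.current().nextInt(candidates.size()));
            Entry<V> e = map.get(k);
            if (e.lastAccess < oldestTime) {
                oldestTime = e.lastAccess;
                oldestKey = k;
            }
        }
        map.remove(oldestKey);
    }
}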

Implementing a simple LRU cache in Java

Here I use a LinkedHashMap to implement a simple LRU-style cache. In theory a linked list should be used alongside it, but for simplicity I use an array of keys instead.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.Arrays;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantLock;

public class LruCacheUtil {

    private final Logger logger = LoggerFactory.getLogger(LruCacheUtil.class);

    // one lock per method name, guarding the keys array
    private Map<String,ReentrantLock> lock = new HashMap<String,ReentrantLock>();

    private final LinkedHashMap<String,Object> cache;

    private final float loadFactor;

    // keys ordered by "heat": index 0 holds the most recently accessed key
    private String[] keys;

    // cache size at which an eviction round is triggered (initSize * loadFactor)
    private int loadFactorIndex;

    // next free slot in the keys array
    private int index = 0;

    // how many keys to evict per eviction round
    private int lrusize;

    public LruCacheUtil(String methodName,int initSize,float loadFactor,int lrusize) {
        if(loadFactor > 1)
            loadFactor = 0.8f; // clamp an invalid load factor before it is used below
        this.cache = new LinkedHashMap<>(initSize,loadFactor);
        this.loadFactor = loadFactor;
        this.keys = new String[initSize];
        this.loadFactorIndex = (int) (initSize * loadFactor);
        this.lrusize = lrusize;
        this.lock.put(methodName,new ReentrantLock());
        logger.info("init LruCache cache={},loadFactor={},keysLength={}",cache,loadFactor,keys.length);
    }

    public boolean cache(String methodName,String key,Object value) {
        if(index + 1 >= loadFactorIndex)
            lruCache(methodName);
        if(cache.containsKey(key)) {
            // the key is already cached: just refresh its value, do not record it again
            cache.put(key,value);
            return true;
        }
        ReentrantLock reentrantLock = this.lock.get(methodName);
        if(!reentrantLock.tryLock()) {
            return false;
        }
        try {
            if(index >= keys.length) {
                // eviction did not free a slot (e.g. the lock was contended earlier); give up
                return false;
            }
            keys[index] = key;
            index++;
        } finally {
            reentrantLock.unlock();
        }
        logger.info("cache k={},v={},cache={},length={}",key,value,cache,keys.length);
        return cache.put(key,value) == null;
    }

    public void lruCache(String methodName) {
        if(this.cache.size() <= this.loadFactorIndex) {
            logger.info("lruCache: size has not reached loadFactorIndex={}, nothing to evict",this.loadFactorIndex);
            return;
        }
        ReentrantLock reentrantLock = this.lock.get(methodName);
        if(!reentrantLock.tryLock()) {
            // lock is contended: skip this eviction round instead of unlocking a lock we do not hold
            return;
        }
        logger.info("lruCache before cache={}",cache);
        try {
            int removeNum = 0;
            // evict up to lrusize entries, scanning from the back of the keys array
            for (int i = keys.length - 1; i >= 0 && removeNum < lrusize; i--) {
                if (keys[i] != null) {
                    removeNum++;
                    cache.remove(keys[i]);
                    keys[i] = null;
                }
            }
            index -= removeNum;
        } finally {
            reentrantLock.unlock();
        }
        logger.info("lruCache after cache={}",cache);
    }

    public Object get(String methodName, String key) {
        if(cache.containsKey(key)) {
            addIndexFromKey(methodName,key);
            return cache.get(key);
        }
        return null;
    }

    private void addIndexFromKey(String methodName,String key) {
        logger.info("keys before promotion: {}",Arrays.toString(keys));
        ReentrantLock reentrantLock = this.lock.get(methodName);
        if(!reentrantLock.tryLock()) {
            // lock is contended: skip the promotion, get() still returns the cached value
            return;
        }
        try {
            for(int i = 0; i < keys.length; i++) {
                // key.equals(keys[i]) avoids a NullPointerException on empty slots
                if(key.equals(keys[i])) {
                    // rotate keys[0..i] one position to the right so the accessed key moves to the head
                    String accessed = keys[i];
                    for(int j = i; j > 0; j--) {
                        keys[j] = keys[j - 1];
                    }
                    keys[0] = accessed;
                    break;
                }
            }
        } finally {
            reentrantLock.unlock();
        }
        logger.info("keys after promotion: {}",Arrays.toString(keys));
    }

    public static void main(String[] args) {
        LruCacheUtil lruCacheUtil = new LruCacheUtil("test",100,0.75f,20);
        lruCacheUtil.cache("test","a","a1");
        lruCacheUtil.cache("test","b","b1");
        lruCacheUtil.cache("test","c","c1");
        lruCacheUtil.cache("test","d","d1");
        lruCacheUtil.cache("test","e","e1");
       /* lruCacheUtil.cache("test","f","f1");
        lruCacheUtil.cache("test","g","g1");
        lruCacheUtil.cache("test","h","h1");
        lruCacheUtil.get("test","h");
        lruCacheUtil.get("test","c");
        lruCacheUtil.get("test","b");
        lruCacheUtil.cache("test","j","j1");
        lruCacheUtil.cache("test","k","k1");
        lruCacheUtil.cache("test","l","l1");
        lruCacheUtil.cache("test","m","m1");
        lruCacheUtil.cache("test","n","n1");
        lruCacheUtil.cache("test","o","o1");*/
    }

}
