
I would like to know whether there are any Go packages that provide a cache with expiry and are efficient.

I checked a few. Here is one of them, but from an implementation perspective it locks the entire cache to write one entry (check this), which should not be necessary, right?

Is it possible to lock one entry instead of locking the entire cache?

  • Not to toot my own horn, but I wrote cmap just to work around that problem. Commented Apr 16, 2016 at 7:04
  • The sharded implementation looks good, but I also want keys to have expiry, which cmap does not support. Commented Apr 16, 2016 at 7:44

2 Answers


From the same repo you linked in your question, there is also an implementation of a sharding strategy that gives you a lock per partition rather than a lock for the whole cache. For example, if you decide to partition into 4 caches, you can compute a hash of the key modulo 4 and store the entry in the cache at that index. Refining this method, you could in theory shard using sub-partitions, decomposing the key through a binary tree to reach the desired caching (and locking) granularity.


2 Comments

Good catch! I didn't see the sharded implementation, but it is only in an experimental phase :(
Well, just fork it, export the symbols, use it, fix it, and submit a PR :)

It is not easy to lock only one entry, but if you want more efficiency, a good practice in Go is to use a channel to communicate with a single goroutine that owns the data. That way there are no shared variables and no locks.

A simple example of this:

import "errors"

type request struct {
    reqtype  string
    key      string
    val      interface{}
    response chan<- result // nil for SET requests
}

type result struct {
    value interface{}
    err   error
}

type Cache struct{ requests chan request }

func New() *Cache {
    cache := &Cache{requests: make(chan request)}
    go cache.server() // the server goroutine owns the map
    return cache
}

func (c *Cache) Get(key string) (interface{}, error) {
    response := make(chan result)
    c.requests <- request{reqtype: "GET", key: key, response: response}
    res := <-response
    return res.value, res.err
}

func (c *Cache) Set(key string, val interface{}) {
    c.requests <- request{reqtype: "SET", key: key, val: val}
}

// server is the only goroutine that touches the map,
// so no locking is needed.
func (c *Cache) server() {
    cache := make(map[string]interface{})
    for req := range c.requests {
        switch req.reqtype {
        case "SET":
            cache[req.key] = req.val
        case "GET":
            if e, ok := cache[req.key]; ok {
                req.response <- result{e, nil}
            } else {
                req.response <- result{nil, errors.New("not exist")}
            }
        }
    }
}

2 Comments

While this works, it's even slower than using a sync.RWMutex.
My experience with channels is that unless you have async tasks with some decent level of networking overhead (or local computation), they are not performant enough compared to memory locks. So it's back to mutexes (and the spinlock underneath their implementation). YMMV. One recommendation I would advocate: always put the mutex definition directly above the fields it locks in the struct.
