Does not work with Redis Cluster. #606
hello @helgi-reon, we currently do not support Redis Cluster. If you want to contribute and create a PR to support it, I would be more than happy to review it. Otherwise, I am afraid this is going into the queue of new features. |
Ok, good to know. I'll definitely contribute when and if I find a solution 😀 |
You will probably need to create a different client, and figure out what the difference is when using |
maybe I am onto something... but could you try again? Maybe this helps: redis/redis-py#2189 |
I managed to get caching working in Django while using a Redis Cluster. It was achieved with a bit of a trial-and-error approach, so it is most likely not an optimal solution.

```python
from redis import RedisCluster


class CustomRedisClient(RedisCluster):
    def __init__(self, url, **kwargs):
        # Discover the cluster topology from a single entry-point node.
        client = RedisCluster.from_url(url)
        kwargs["startup_nodes"] = client.get_nodes()
        # Drop the connection pool that django-redis passes in; it does not
        # behave correctly against a cluster (see below).
        del kwargs["connection_pool"]
        super().__init__(**kwargs)
```

```python
# settings.py
# REDIS_URL just needs to point to one of the nodes in the cluster,
# e.g. redis://localhost:6379/0
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": REDIS_URL,
        "OPTIONS": {
            "REDIS_CLIENT_CLASS": "common.redis_client.CustomRedisClient",
            "REDIS_CLIENT_KWARGS": {
                "url": REDIS_URL,
            },
            "PARSER_CLASS": "redis.cluster.ClusterParser",
        },
    },
}
```

The most interesting part is the deletion of the connection pool attribute. For some reason the connection pool created the correct number of node connections, but all with the same port. That meant that most of the time it couldn't find a cached value, because it was stored on a different node. |
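The "stored on a different node" behavior follows from how Redis Cluster maps keys to nodes: each key is hashed with CRC16 into one of 16384 slots, and each node owns a subset of those slots, so a client pinned to a single node/port will miss keys that live elsewhere. Here is a stdlib-only sketch of that slot calculation (my own illustration, not code from this thread; it assumes the CRC16-CCITT/XModem variant that the cluster spec describes):

```python
def crc16(data: bytes) -> int:
    # CRC16-CCITT (XModem): the checksum Redis Cluster uses for key slots.
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc


def key_slot(key: bytes) -> int:
    # Hash-tag rule: if the key contains a non-empty {...} section,
    # only that part is hashed, so related keys can share a slot.
    start = key.find(b"{")
    if start != -1:
        end = key.find(b"}", start + 1)
        if end > start + 1:
            key = key[start + 1 : end]
    return crc16(key) % 16384


# Different keys generally land in different slots (often different nodes)...
print(key_slot(b":1:somekey"), key_slot(b":1:otherkey"))
# ...unless a hash tag forces them together:
print(key_slot(b"{user:42}:a") == key_slot(b"{user:42}:b"))  # True
```

This is also why a pool that opens every connection to the same port breaks lookups: a `GET` only succeeds on the node that owns the key's slot.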
nice! would you like to open a PR with cluster support? |
Hi there! I've been working to add cluster support to our project based on @helgi-reon's approach. I figured RedisCluster should be initialized once per process to be able to have connection pooling, given that on redis-py

This is the code I'm using for now (and on settings pointing

```python
import threading

from redis import RedisCluster

_lock = threading.Lock()

# _cluster is a process-global; otherwise it would be cleared every time
# redis_cluster_factory is called, since Django creates a new cache client
# instance for every request.
_cluster = None


def redis_cluster_factory(url, **kwargs):
    # django-redis will pass us a connection pool in kwargs, which we don't
    # use, given that connection pools are handled per node on RedisCluster.
    global _cluster
    if _cluster is None:
        with _lock:
            if _cluster is None:
                _cluster = RedisCluster(url=url)
    return _cluster
```

@WisdomPill what do you think would be the best route to contribute the Redis Cluster support to the project?
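To see that this double-checked locking shape really yields one client per process, here is a self-contained sketch where a hypothetical `FakeCluster` stands in for `RedisCluster` (the stand-in and its counter are illustrative, not part of django-redis or redis-py):

```python
import threading


class FakeCluster:
    """Hypothetical stand-in for redis.RedisCluster that counts constructions."""
    instances = 0

    def __init__(self, url=None, **kwargs):
        FakeCluster.instances += 1


_lock = threading.Lock()
_cluster = None


def redis_cluster_factory(url, **kwargs):
    # Same shape as the factory above: check, lock, re-check, construct once.
    global _cluster
    if _cluster is None:
        with _lock:
            if _cluster is None:
                _cluster = FakeCluster(url=url)
    return _cluster


a = redis_cluster_factory("redis://localhost:6379/0")
b = redis_cluster_factory("redis://localhost:6379/0")
print(a is b, FakeCluster.instances)  # → True 1
```

However many times Django asks for a cache client, the constructor runs once; the lock plus the second `is None` check closes the race where two threads both pass the first check.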
I don't think we can support different

Another thing is that I would like to allow users to have cluster and non-cluster caches at the same time; for this I should move |
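Purely as a sketch of the "cluster and non-cluster caches at the same time" idea, settings could look like the following (the alias names and cluster-side options are assumptions based on the workaround earlier in this thread, not an existing django-redis API):

```python
# settings.py — hypothetical layout, not a supported configuration today
CACHES = {
    # Plain single-node Redis, handled by django-redis as usual.
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://localhost:6379/0",
    },
    # Cluster-backed cache using a custom client class, per-alias.
    "cluster": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://cluster-node-1:6379/0",
        "OPTIONS": {
            "REDIS_CLIENT_CLASS": "common.redis_client.CustomRedisClient",
        },
    },
}
```

The point is that any cluster switch would need to live per cache alias (in `OPTIONS`) rather than as a global setting.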
We tried out the implementations above and ran into issues when one of our Redis nodes was replaced. We're operating a Redis cluster in Kubernetes, so we expect this to happen regularly as pods are moved between nodes. Have you run into this, and do you have any thoughts on how to add failover support to the backend? |
@nicootto I would add a subclass or |
Hey friends, I've been picking around the corners of this as well while trying to migrate off of redis-py-cluster to instead use the

I've been stealing/consolidating/learning from some of the ideas here; my current thoughts are over in this gist: https://gist.github.com/jonprindiville/f97084ca8f91501c17175a7b7a9578af

There's a connection factory there, because the way that base

The base

(Aside: I do think that a pluggable connection factory would be neat, as suggested by @nicootto, but maybe that's a separate feature, idk)

Given this kind of a setup, my Django settings would be like the following:
If it helps, as a starting point, I could get a PR going for that stuff. There are probably gonna be large holes that need filling, though, before that amounts to good cluster support. Like, I'm not really sure how you'd want to go about adding cluster stuff to your CI processes, or what kind of coverage to add; not sure about the
@sondrelg Hm, I would wonder if that might be more of a redis-py concern. I know that the clients there seem to have some amount of error recovery, e.g. retrying based on

Perhaps tweaking some of the parameters at that level helps you recover from some types of situations? Like

Without understanding all of what's going on inside of redis-py, I wouldn't know what to suggest WRT adding failover on top at this layer.
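For reference, the retry idea mentioned above is usually "exponential backoff with jitter"; redis-py ships comparable policies in its `redis.backoff` module. A stdlib-only sketch of the delay schedule (the function name, base, and cap here are my own choices, not redis-py defaults):

```python
import random


def backoff_delays(retries: int, base: float = 0.008, cap: float = 0.512):
    """Exponential backoff with full jitter: retry i waits a random
    amount between 0 and min(cap, base * 2**i) seconds."""
    return [random.uniform(0, min(cap, base * 2 ** attempt))
            for attempt in range(retries)]


for attempt, delay in enumerate(backoff_delays(5)):
    print(f"retry {attempt}: sleep up to {delay:.3f}s")
```

The jitter spreads out reconnect storms when many workers lose a node at once, which is exactly the pod-replacement scenario described earlier.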
@nicootto True, yes. But! I think that from the perspective of django-redis the

The fact that inside of |
edit: it is quite possible this could be an issue on GCP's side

We're running this in production over in https://github.com/grafana/oncall against a GCP Memorystore managed Redis Cluster. We've taken inspiration from this GitHub issue, as well as some conversations over in

```
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/django_redis/client/default.py", line 258, in get
    value = client.get(key)
            ^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/redis/commands/core.py", line 1829, in get
    return self.execute_command("GET", name)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/redis/cluster.py", line 1115, in execute_command
    raise e
  File "/usr/local/lib/python3.11/site-packages/redis/cluster.py", line 1101, in execute_command
    res[node.name] = self._execute_command(node, *args, **kwargs)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/redis/cluster.py", line 1144, in _execute_command
    connection = get_connection(redis_node, *args, **kwargs)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/redis/cluster.py", line 51, in get_connection
    return redis_node.connection or redis_node.connection_pool.get_connection(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/redis/connection.py", line 1086, in get_connection
    connection.connect()
  File "/usr/local/lib/python3.11/site-packages/redis/connection.py", line 279, in connect
    self.redis_connect_func(self)
  File "/usr/local/lib/python3.11/site-packages/redis/cluster.py", line 672, in on_connect
    connection.on_connect()
  File "/usr/local/lib/python3.11/site-packages/redis/connection.py", line 342, in on_connect
    auth_response = self.read_response()
                    ^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/redis/connection.py", line 524, in read_response
    raise response
redis.exceptions.ResponseError: Internal server error (code -1)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/django_redis/cache.py", line 29, in _decorator
    return method(self, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/django_redis/cache.py", line 99, in _get
    return self.client.get(key, default=default, version=version, client=client)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/django_redis/client/default.py", line 260, in get
    raise ConnectionInterrupted(connection=client) from e
django_redis.exceptions.ConnectionInterrupted: Redis ResponseError: Internal server error (code -1)
```

Our (simplified/abridged) setup is as follows:
|
@joeyorlando Hm, I'm afraid I don't have any relevant experiences like that to share at the moment, sorry. I have given that code a bit of a run in a staging environment, but I have not yet given it extended time in production against real traffic.

Also: in my context I'm not interacting with GCP; I'm in AWS with an ElastiCache cluster (currently compatible with Redis 5.0.6, though I think we will soon move to a version compatible with Redis 7).

I think I may have to spend some time on this soon, though 😅 I think I'm blocked on something else until I can drop redis-py-cluster and update redis-py + django-redis. I think it'd be relatively easy to get a PR together for this existing gist/example stuff, but the largest questions in my mind in the immediate future for django-redis to support redis-py's cluster client are WRT the django-redis test suite...

Oh, can you point at any relevant redis-py conversations? I'm not actively tracking anything on that side at the moment. |
if I can be of any help with a PR here, let me know!

On a side note, turns out my cryptic

Just figured I'd throw this ☝️ here in case anyone else in the community runs into the same issue. |
I would love to see a PR about support for Redis Cluster. I would use the same tactic we're currently using for Sentinel (there's a script in tests/start_redis.sh), although I was trying to set everything up like it is done in redis-py, with no success, in #583. |
Problem Statement
Use django-redis with Redis clustering. An error occurs when interacting with the cache. The error points to a pickle operation on an instance of the ConnectionPool class, where one of its attributes, a thread lock, cannot be serialized, resulting in the following error:
To Reproduce
Steps to reproduce the behavior:
1. Set up a Redis cluster that REDIS_URL points to.
2. Configure CACHES in the Django settings file.
3. Call cache.get("somekey").
Expected behavior
The value of the key from the Redis cluster.
Stack trace
Environment:
Additional Context 🚨
This is my first time using this library, so it is not unlikely that my configurations are wrong.
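The serialization failure described in the problem statement can be reproduced in isolation: thread locks, which redis-py's ConnectionPool holds internally, are not picklable. A minimal sketch (FakePool here is an illustrative stand-in, not the real class):

```python
import pickle
import threading


class FakePool:
    """Illustrative stand-in for a connection pool that owns a lock."""

    def __init__(self):
        self._lock = threading.Lock()


try:
    pickle.dumps(FakePool())
except TypeError as exc:
    # e.g. "cannot pickle '_thread.lock' object"
    print(f"pickling failed: {exc}")
```

Any code path that tries to copy or serialize the pool (rather than sharing or rebuilding it) will hit this same TypeError.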