Native distributed architecture

Clusters are now first-class citizens in CabloyJS. In other words, a CabloyJS project is ready to be deployed in a clustered environment from the start

The original Worker + Agent process model of EggJS is very convenient on a single machine. However, when it comes to multi-machine clusters, especially Docker-based cluster deployments, the Agent process loses its usefulness. More importantly, if development relies on the Agent process from the beginning, it is difficult to transition smoothly to a distributed scenario later

Therefore, the CabloyJS backend builds on Redis and designs a native distributed architecture from the bottom of the framework up, deriving a series of distributed development components such as Broadcast, Queue, Schedule, and Startup. This makes development distributed-friendly from day one, so that once the system grows, cluster expansion can be done easily
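These components can work across processes and machines precisely because every worker talks to the same Redis service. The following is only a minimal sketch of that underlying mechanism, written against the plain ioredis client rather than the CabloyJS API: a message published by one worker is delivered to every subscribed worker in the cluster, which is the essence of a Redis-backed Broadcast.

const Redis = require('ioredis');

const pub = new Redis({ host: '127.0.0.1', port: 6379 });
const sub = new Redis({ host: '127.0.0.1', port: 6379 });

// every worker subscribes to the channel on startup...
sub.subscribe('broadcast:demo', err => {
  if (err) throw err;
  // ...so a publish from any single worker fans out to the whole cluster
  pub.publish('broadcast:demo', JSON.stringify({ hello: 'cluster' }));
});

sub.on('message', (channel, message) => {
  console.log(`received on ${channel}:`, message);
});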

Redis Config

Since CabloyJS has three built-in running environments, we need to configure different Redis parameters for each running environment

Take notice: the following are the default configurations, which generally do not need to be changed; just make sure that host and port match your actual Redis service

src/backend/config/config.{env}.js

// redis
const __redisConnectionDefault = {
  host: '127.0.0.1',
  port: 6379,
  password: '',
  db: 0,
};
const __redisConnectionDefaultCache = Object.assign({}, __redisConnectionDefault, {
  keyPrefix: `cache_${appInfo.name}:`,
});
const __redisConnectionDefaultIO = Object.assign({}, __redisConnectionDefault, {
  keyPrefix: `io_${appInfo.name}:`,
});

config.redisConnection = {
  default: __redisConnectionDefault,
  cache: __redisConnectionDefaultCache,
  io: __redisConnectionDefaultIO,
};

config.redis = {
  clients: {
    redlock: config.redisConnection.default,
    limiter: config.redisConnection.default,
    queue: config.redisConnection.default,
    broadcast: config.redisConnection.default,
    cache: config.redisConnection.cache,
    io: config.redisConnection.io,
  },
};

The above code defines multiple Redis client instances for different scenarios, such as redlock, limiter, etc. By default, all of these client instances point to the same Redis service
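Also note that the cache and io connections differ from the default one only by keyPrefix. Assuming the underlying client is ioredis (whose options match the host/port/password/db/keyPrefix fields above), keyPrefix transparently namespaces every key, so these clients can share one Redis database without their keys colliding. A minimal ioredis sketch, with a hypothetical prefix value:

const Redis = require('ioredis');

// keyPrefix makes the client prepend the prefix to every key it reads or writes
const cacheClient = new Redis({ host: '127.0.0.1', port: 6379, db: 0, keyPrefix: 'cache_myapp:' });

async function demo() {
  await cacheClient.set('user:1', 'hello'); // actually stored as "cache_myapp:user:1"
  console.log(await cacheClient.get('user:1')); // prints 'hello'
}
demo().finally(() => cacheClient.quit());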

When traffic increases, you can point different client instances to different Redis services to spread the load
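For example, assuming a second Redis server is available (the 192.168.0.2 address below is purely hypothetical), the cache and io clients could be given their own connection while the remaining clients stay on the default service:

// Sketch only: reuse the structure of the default config above, but point the
// heavier clients (cache, io) at a dedicated Redis service.
const __redisConnectionSecond = {
  host: '192.168.0.2', // hypothetical dedicated Redis server
  port: 6379,
  password: '',
  db: 0,
};

config.redis = {
  clients: {
    redlock: config.redisConnection.default,
    limiter: config.redisConnection.default,
    queue: config.redisConnection.default,
    broadcast: config.redisConnection.default,
    // cache and io now use the second Redis service to share the pressure
    cache: Object.assign({}, __redisConnectionSecond, { keyPrefix: `cache_${appInfo.name}:` }),
    io: Object.assign({}, __redisConnectionSecond, { keyPrefix: `io_${appInfo.name}:` }),
  },
};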

  • __redisConnectionDefault

    Name        Description
    host        Redis server IP address
    port        Redis server port
    password    Redis password (empty if none)
    db          Database index; default is 0
  • config.redis.clients

    Name        Description
    redlock     Distributed lock
    limiter     Rate limiter
    queue       Queue
    broadcast   Broadcast
    cache       Cache
    io          Socket.IO (real-time IO)