Building Scalable Microservice Architecture with Next.js, Node.js, Redis
In today’s fast-paced world, building a scalable microservice architecture is crucial. In this post, we explore the power of Next.js, Node.js, and Redis in creating a highly scalable and efficient system. We’ll see how using Redis as a caching layer enables faster API responses and reduces the load on the database.
We delve into the benefits of caching and discuss how Redis improves performance and scalability in microservice architectures. Whether you’re a developer or architect, this article provides valuable insights into harnessing the potential of these technologies to build robust and scalable applications.
Overview:
A simple project with a client app, a Node.js backend server, and a Redis worker server, illustrating the caching data flow and message processing.
GET requests flow from the client-app through the api-server (with a Redis cache in front of the database), while POST requests that involve time-consuming operations are handed off to the worker server.
Project Architecture:
The project follows a microservice architecture and consists of the following micro-projects:
- client-app: Built with Next.js and Tailwind CSS. Handles the frontend interface and user interactions.
- api-server: Developed with Node.js and uses a MySQL database. Serves as the central backend server. Interacts with the MySQL database for data storage and retrieval. Integrates Redis for caching, enhancing performance.
- worker-server: A separate server subscribed to the Redis pub/sub system. Handles asynchronous background tasks for POST requests. Processes messages from the api-server, performs the necessary operations, and stores data in the MySQL database.
This architecture enables scalability and separation of concerns. The client-app handles the frontend, the api-server processes requests and interacts with the database, while the worker-server efficiently handles background tasks. Redis integration enhances performance through caching.
Frontend Implementation with Next.js (“client-app”):
The “client-app” plays a vital role as the primary frontend application in our microservice architecture.
It interacts with the backend through API calls to the “api-server”, which communicates with a MySQL database.
Here, a POST request is sent to the “api-server” to create a record in the database:
// handle form submit list data
const handleSubmit = async () => {
  await createTech(userInput);
  const { isCached, data } = await getTech();
  setTechList(data);
  setIsCache(isCached);
};

// send the new entry to the api-server
export const createTech = async (text) => {
  const resp = await fetch(`${apiUrl}/create`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text }),
  });
  const data = await resp.json();
  return data;
};
And a GET request to display the data from the database:
// handle refresh list data
const handleRefresh = async () => {
  const { isCached, data } = await getTech();
  setTechList(data);
  setIsCache(isCached);
};

// fetch the list (and its cache status) from the api-server
export const getTech = async () => {
  const resp = await fetch(`${apiUrl}/get`);
  const data = await resp.json();
  return data;
};
Backend Implementation with Node.js, Redis, and MySQL (“api-server”):
The “api-server” is a crucial component of our microservice architecture. Let’s explore its responsibilities and how it interacts with Redis and MySQL.
The api-server is responsible for handling incoming requests from the client-app and serves as the central backend server.
When a GET request is received, the api-server leverages Redis caching to optimize performance. It first checks the Redis cache for relevant data. If the data is found in the cache, the api-server can promptly respond to the client-app without querying the MySQL database. This caching mechanism significantly reduces response times and minimizes unnecessary database queries.
However, in cases where the cache is empty, the api-server queries the MySQL database, retrieves the requested data, and caches it in Redis for future use. This ensures that subsequent requests for the same data can be served directly from the cache, further enhancing response times and alleviating the database load.
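The read path described above is the classic cache-aside pattern. Here is a minimal, self-contained sketch of that logic, with an in-memory Map standing in for Redis and a stub function in place of the MySQL query (the names are illustrative, not the project’s actual code):

```javascript
// Cache-aside read: check the cache first, fall back to the database,
// then populate the cache so subsequent requests are served from memory.
const cache = new Map(); // stands in for Redis in this sketch

function getListFromDB() {
  // stands in for a MySQL SELECT
  return ["node", "redis", "mysql"];
}

function getTechList() {
  if (cache.has("techList")) {
    return { isCached: true, data: cache.get("techList") };
  }
  const list = getListFromDB();
  cache.set("techList", list); // populate the cache for next time
  return { isCached: false, data: list };
}

const first = getTechList(); // cache miss: reads from the "database"
const second = getTechList(); // cache hit: served from the Map
```

The first call pays the full database cost; every call after that is answered from memory, which is exactly the saving the api-server gets from Redis.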
Let’s take a look at the code for the GET route in the api-server:
// '/get' get all lists
app.get("/get", async (_, res) => {
  const cachedTechList = await getTechFromCache();
  if (!cachedTechList) {
    const list = await getListFromDB();
    res.status(200).send({ isCached: false, data: list });
    await addTechToCache(list); // cache the fresh list for future requests
    return;
  }
  res.status(200).send({ isCached: true, data: cachedTechList });
});
The functions that add to and read from the Redis cache:
// add a value to the redis cache
export const addTechToCache = async (tech) => {
  await redisClient.connect();
  await redisClient.set(redisCacheName, JSON.stringify(tech));
  await redisClient.disconnect();
};

// read a value from the redis cache
export const getTechFromCache = async () => {
  await redisClient.connect();
  const cachedTechString = await redisClient.get(redisCacheName);
  await redisClient.disconnect();
  return cachedTechString ? JSON.parse(cachedTechString) : null;
};
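The worker-server also calls a deleteTechFromCache helper to invalidate this cache after it writes new data, so that the next GET repopulates it from MySQL. That helper isn’t shown in the post; a sketch following the same connect/operate/disconnect pattern, parameterized with the client so the example runs standalone (the real helper would use the shared redisClient and redisCacheName), might look like:

```javascript
// Hypothetical cache-invalidation helper, following the same
// connect/del/disconnect pattern as the other Redis helpers.
const deleteTechFromCache = async (client, cacheName) => {
  await client.connect();
  await client.del(cacheName); // drop the stale cached list
  await client.disconnect();
};

// A tiny in-memory stub standing in for a node-redis client:
const makeStubClient = (store) => ({
  connect: async () => {},
  disconnect: async () => {},
  del: async (key) => store.delete(key),
});

const store = new Map([["techList", JSON.stringify(["node"])]]);
deleteTechFromCache(makeStubClient(store), "techList").then(() => {
  // the next GET would now miss the cache and repopulate it from MySQL
});
```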
For POST requests, the api-server publishes a message to the Redis pub/sub system, which is then received and processed by the worker-server.
The code snippet below demonstrates the implementation of the create route in the api-server.
// '/create' create tech lists
app.post("/create", async (req, res) => {
  try {
    const { text } = req.body;
    console.log("input text:", text);
    await publishTech(text); // publish task to a redis channel
    res.status(201).send({ message: "Tech List created successfully" });
  } catch (error) {
    console.log("create tech error==>", error);
    res.status(500).send({ message: error?.message || "Server Error" });
  }
});
The function that publishes to the Redis channel:
// publish to a redis channel (consumed by the worker-server)
export const publishTech = async (tech) => {
  await redisClient.connect();
  await redisClient.publish(redisChannel, tech);
  await redisClient.disconnect();
};
This asynchronous processing allows the api-server to quickly acknowledge the client’s request while the worker-server handles the necessary background tasks efficiently.
The integration of Redis and MySQL in the api-server enhances performance, reduces database load, and ensures a smooth and scalable microservice architecture.
Asynchronous Background Processing with Redis Pub/Sub (“worker-server”):
The “worker-server” plays a crucial role in our microservice architecture by handling asynchronous background tasks. Let’s explore its purpose, functioning, and how it listens to the Redis pub/sub system to efficiently handle these tasks. We’ll also share code snippets demonstrating the implementation of the “worker-server.”
The worker-server is a separate server that subscribes to the Redis pub/sub system. It is designed to handle background tasks related to POST requests, providing efficient asynchronous processing. When a POST request involving data creation is received by the api-server, it publishes a message to the Redis pub/sub system and promptly responds positively to the client-app.
The worker-server, subscribed to the Redis pub/sub, receives this message and performs the necessary operations associated with the request. It ensures the smooth execution of background tasks such as data processing, validation, or any additional operations required before storing the data in the database.
Let’s take a look at a code snippet that demonstrates the implementation of the worker-server:
subscriber.on("ready", () => {
  console.log("\n ✔ redis subscriber is ready.");
  subscriber.subscribe(redisChannel, async (techName) => {
    console.log(`\n techName from worker service: ${techName}`);
    // call the task worker function listening from channel
    await expensiveWorker(techName);
  });
});

// the task worker triggered by messages from the redis channel
export const expensiveWorker = async (techName) => {
  try {
    const techAnalysis = getTechPriority(techName);
    await addListToDB(techAnalysis);
    await deleteTechFromCache(); // invalidate the cache so the next GET reads fresh data
  } catch (error) {
    console.log("error", error);
  }
};
In the code snippet above, we subscribe to the Redis pub/sub system and listen on the channel named by redisChannel. When a message is received on this channel, we treat it as the name of the newly created tech entry and perform the operations needed before storing the data in the database.
The worker-server’s asynchronous processing allows the api-server to quickly acknowledge the client’s request while offloading time-consuming tasks to be efficiently handled in the background. This improves the responsiveness of the microservice architecture and ensures smooth execution of operations.
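Note that expensiveWorker relies on two helpers not shown in the post, getTechPriority and addListToDB. Their real implementations live in the repository; purely illustrative stand-ins (the priority heuristic below is invented for this sketch) could look like:

```javascript
// Illustrative stand-ins for the worker's helpers — assumed logic,
// not the repository's actual implementations.

// getTechPriority: derive an analysis object from the tech name
const getTechPriority = (techName) => ({
  name: techName,
  priority: techName.length > 5 ? "high" : "low", // placeholder heuristic
});

// addListToDB: stands in for an INSERT into the MySQL table
const savedRows = [];
const addListToDB = async (row) => {
  savedRows.push(row);
};

const analysis = getTechPriority("kubernetes");
addListToDB(analysis); // the push runs synchronously inside the async fn
```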
Conclusion:
This project showcases a scalable microservice architecture utilizing Next.js, Node.js, and Redis. By leveraging Redis caching and pub/sub functionality, we achieve faster response times and efficient background processing. Implementing this architecture offers benefits such as improved performance, reduced database load, and seamless scalability. Check out the GitHub repository for code examples and further exploration.
GitHub Repository: Link to GitHub Repository
By adopting this microservice architecture, you can build highly scalable and responsive applications. Redis acts as a powerful caching solution, optimizing data retrieval and reducing the load on your MySQL database. The pub/sub mechanism enables asynchronous background processing, allowing your API server to quickly respond to client requests while offloading time-consuming tasks to the worker-server.
Feel free to explore the GitHub repository for the complete code and dive deeper into the implementation details. This scalable microservice architecture empowers you to build robust and efficient applications that can handle increased traffic and data processing demands.
Thank you for joining me on this journey of building a scalable microservice architecture with Next.js, Node.js, and Redis.
Happy coding!